Data parallelism is a method of parallel computing in which data is distributed across multiple processors that operate on it simultaneously. It applies naturally to regular data structures such as arrays and matrices, where each element can be processed in parallel. Execution can be significantly faster than the sequential equivalent, but the performance of a data-parallel program also depends on the locality of its data references, which is shaped by the program's memory-access pattern and the cache size.
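As a minimal sketch of the idea, the following Python snippet splits an array into chunks and applies the same operation to each chunk in parallel. The function names (`square_chunk`, `parallel_square`) and the choice of a thread pool are illustrative assumptions, not part of any specific system described here.

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    # Every worker runs the same operation on its own slice of the data.
    return [x * x for x in chunk]

def parallel_square(data, workers=4):
    # Partition the array into roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(square_chunk, chunks)
    # Reassemble the per-chunk results in their original order.
    return [x for chunk in results for x in chunk]

print(parallel_square(list(range(8))))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that contiguous chunks also illustrate the locality point above: each worker streams through one cache-friendly slice rather than striding across the whole array.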
Stanford University
Fall 2022
This course focuses on the principles and trade-offs involved in designing modern parallel computing systems, and also teaches parallel programming techniques. It is intended for students who want to understand both parallel hardware and parallel software design. Prerequisite knowledge of computer systems is required.