Parallel computing

Parallel computing involves carrying out multiple calculations or processes simultaneously, which allows large problems to be solved more efficiently. It has become increasingly important in computer architecture as frequency scaling has hit physical limits and power consumption has become a dominant concern. Parallelism takes several forms (for example bit-level, instruction-level, data, and task parallelism), and parallel computing is closely related to, but distinct from, concurrent computing. Hardware support for parallelism varies widely, and specialized architectures are sometimes used for specific tasks. Writing explicitly parallel algorithms is challenging, however, because of potential software bugs (such as race conditions) and the overhead of communication and synchronization between subtasks. The speed-up achievable by parallelizing a program is limited by Amdahl's law.
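Amdahl's law bounds the speed-up from parallelization: if a fraction p of a program's runtime can be parallelized across n processors, the overall speed-up is 1 / ((1 - p) + p/n). A minimal sketch in Python (the function name and example numbers are illustrative, not from the source):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Overall speed-up per Amdahl's law: only the parallelizable
    fraction of the work benefits from additional processors."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Example: 95% of the work is parallelizable.
print(amdahl_speedup(0.95, 8))           # ~5.93x on 8 processors
print(amdahl_speedup(0.95, 1_000_000))   # approaches the 20x ceiling (1 / 0.05)
```

Note the ceiling: no matter how many processors are added, the serial 5% caps the speed-up at 20x, which is why reducing the serial fraction often matters more than adding hardware.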

1 course covers this concept

CS 149 PARALLEL COMPUTING

Stanford University

Fall 2022

This course focuses on the principles and trade-offs involved in designing modern parallel computing systems, and it also teaches parallel programming techniques. It is intended for students who want to understand both parallel hardware and parallel software design. Prerequisite knowledge of computer systems is required.
