Distributed memory refers to a multiprocessor computer system in which each processor has its own private memory. Because a processor can operate only on its local data, it must communicate with remote processors to access their data, and the topology of the interconnection network determines how well the machine scales. Interconnects between nodes can be implemented with standard or bespoke networks, or with dual-ported memories.
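As a minimal sketch of this model, the following C program uses MPI (an assumption for illustration; the text above does not name a specific programming interface) to show two processes, each with private memory, exchanging data explicitly by sending messages over the network.

/* Sketch of the distributed-memory model using MPI (assumed here as an
 * illustration). Each rank owns private data and must exchange messages
 * to see another rank's data.
 * Compile: mpicc demo.c -o demo    Run: mpirun -np 2 ./demo */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank * 100;   /* value that lives only in this process's memory */
    int remote = -1;          /* will hold the other process's value after communication */

    if (size >= 2 && rank < 2) {
        if (rank == 0) {
            /* Rank 0 sends its value to rank 1, then receives rank 1's value. */
            MPI_Send(&local, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&remote, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            /* Rank 1 receives first, then sends, so the two sides do not deadlock. */
            MPI_Recv(&remote, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&local, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        printf("rank %d: local=%d, received remote=%d\n", rank, local, remote);
    }

    MPI_Finalize();
    return 0;
}

The point of the sketch is that no process ever reads another's memory directly: every remote access is an explicit message across the interconnect.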
UC Berkeley
Spring 2020
The course teaches how to program parallel computers to solve complex scientific and engineering problems. It covers a range of parallelization strategies for numerical simulation, data analysis, and machine learning, and provides experience with popular parallel programming tools.