A distributed memory program assumes that each task has its own virtual address space, and therefore that tasks cannot coordinate or communicate through shared memory.

In distributed memory programming, message passing is the typical way for tasks to communicate and coordinate their activities. Although many message-passing interfaces have been developed, most clusters now provide implementations of the Message Passing Interface (MPI). The Data Communication section of this topic introduces a few examples of message passing via MPI. Other CVW topics offer more detailed descriptions of the MPI interface and demonstrate how to use MPI for HPC tasks.
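As a preview of what such calls look like, here is a minimal sketch of point-to-point message passing in C with MPI. It assumes the program is compiled with an MPI compiler wrapper (e.g., mpicc) and launched with at least two tasks; because the tasks share no memory, rank 0 must explicitly send a value to rank 1 as a message.

/* Minimal sketch: rank 0 sends one integer to rank 1.
 * Each rank has its own private copy of "value"; the only way to move
 * the number between tasks is an explicit message. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}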

Some high-level languages include constructs for automatically distributing blocks of arrays across multiple tasks. Programming with distributed arrays is called data parallel programming because each task works on a different section of the same array of data. The data parallel constructs offered in high-level languages are often implemented on top of MPI, so it is sometimes possible to take advantage of MPI functionality without using the MPI interface directly.
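For illustration, the sketch below uses plain MPI calls to carry out the kind of block decomposition that data parallel constructs automate. The array size N and the assumption that it divides evenly among the tasks are chosen here only for simplicity; each task allocates and processes just its own block, and a reduction combines the per-task results.

/* Sketch of block decomposition: a conceptual global array of N elements
 * is split evenly across tasks; each task initializes and sums only the
 * block it owns. Assumes N is divisible by the number of tasks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000                      /* size of the conceptual global array */

int main(int argc, char *argv[])
{
    int rank, ntasks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int local_n = N / ntasks;          /* elements owned by this task */
    int offset  = rank * local_n;      /* position of this block in the global array */
    double *block = malloc(local_n * sizeof(double));

    double local_sum = 0.0;
    for (int i = 0; i < local_n; i++) {
        block[i] = (double)(offset + i);   /* work only on the local block */
        local_sum += block[i];
    }

    /* Combine the per-task partial sums into a global result on rank 0. */
    double global_sum;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Global sum = %f\n", global_sum);

    free(block);
    MPI_Finalize();
    return 0;
}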

In contrast to shared memory programming, distributed memory programming is less dependent on the details of the underlying hardware. Distributed memory programs are not limited to running on separate cluster nodes; if a node has more than one core and enough physical memory, it can execute more than one of a distributed memory program's tasks. However, it is generally not useful to assign more tasks to a node than it has cores, because in a well-tuned HPC application the cores should be kept busy continuously, without any need for time slicing or multiplexing.
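One way to observe how tasks are placed is the short sketch below, in which every task reports the node it is running on. Launching more tasks than there are nodes (up to the number of cores per node) will show several tasks sharing a single node.

/* Sketch: each task reports which node it is running on, making it easy
 * to see when several tasks share one node's cores. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, namelen;
    char nodename[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(nodename, &namelen);

    printf("Task %d is running on node %s\n", rank, nodename);

    MPI_Finalize();
    return 0;
}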

 