Data Movement
Christopher Cameron, Steve Lantz, Brandon Barker, CAC Staff (original)
Cornell Center for Advanced Computing
Revisions: 5/2022, 1/2014, 2001 (original)
This topic describes MPI's collective data movement routines. This family of routines provides a convenient way to move data among processes. Specific functions handle the most common data movement patterns, including broadcasting, scattering, and gathering data.
MPI provides three categories of collective data movement routines in which one process either sends to all processes or receives from all processes: broadcast, gather, and scatter. There are also allgather and alltoall routines, in which all processes both send and receive data. The gather, scatter, allgather, and alltoall routines have variable-data ("v") versions, in which each process can send and/or receive a different number of elements. The MPI collective data movement routines are:
- broadcast
- gather, gatherv
- scatter, scatterv
- allgather, allgatherv
- alltoall, alltoallv
Now, let's take a look at the functionality and syntax of these routines.
Objectives
After you complete this segment, you should be able to:
- List the MPI collective data movement routines
- Demonstrate data broadcasting in C and Fortran
- Explain the purposes of the gather and scatter routines
- Explain the purposes of the gatherv and scatterv routines
- Explain the purposes of the allgather and allgatherv routines
- Provide examples of when MPI_Alltoall is useful
Prerequisites
- A basic knowledge of parallel programming and MPI. Information on these prerequisites can be found in other topics (Parallel Programming Concepts and High-Performance Computing, MPI Basics).
- Ability to program in a high-level language such as Fortran or C.