The following two pages present an MPI sample program in C and Fortran. On these pages, the lines containing MPI routine calls are highlighted, and the code is followed by a detailed description of each highlighted routine's purpose and syntax.

In this program, each process initializes itself with MPI (MPI_INIT), determines the number of processes (MPI_COMM_SIZE), and learns its rank (MPI_COMM_RANK). Then one process (rank 0 in this example) sends messages in a loop (MPI_SEND), setting the destination argument to the loop index to ensure that each of the other processes is sent one message. The remaining processes receive one message (MPI_RECV). All processes then print the message and exit from MPI (MPI_FINALIZE).
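Before turning to the full listings, here is a minimal C sketch of the pattern just described. The message text, tag value, and buffer size are illustrative choices, not the actual sample code on the following pages, which may differ in these details.

/* Minimal sketch of the six-call MPI pattern described above.
   The message string, tag, and buffer size are illustrative. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define TAG 0

int main(int argc, char *argv[])
{
    int rank, size, i;
    char msg[32];
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* initialize MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes are there? */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which one am I? */

    if (rank == 0) {
        /* Rank 0 sends one message to each of the other ranks;
           the loop index is the destination argument */
        strcpy(msg, "Hello, world");
        for (i = 1; i < size; i++)
            MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, i, TAG, MPI_COMM_WORLD);
    } else {
        /* Every other rank receives exactly one message from rank 0 */
        MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &status);
    }

    printf("Rank %d of %d: %s\n", rank, size, msg);

    MPI_Finalize();                         /* exit from MPI */
    return 0;
}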

Consider the following:
  • This program uses only six basic MPI routines.
  • This program is an SPMD (single program, multiple data) code: identical copies of this program run on multiple processors.
  • It is worth noting what this program doesn't do: there is no routine that causes multiple copies of the program to begin to run in parallel. This is accomplished when the user (or the batch system) starts the job using mpirun or mpiexec. On Stampede2 and Frontera, the ibrun command takes the place of mpirun.
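For example, if the compiled executable were named hello_mpi (an illustrative name), a four-process run might be launched with a command like mpiexec -n 4 ./hello_mpi; on Stampede2 or Frontera the equivalent would be ibrun ./hello_mpi inside a batch script, with the process count typically taken from the job's Slurm allocation rather than from a command-line flag.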
 