Program hello_mpi.f90

In case you're interested in learning the details, here are the roles of all the MPI routines in the above code and the types of parameters involved in each call.


Line 7: Initializing an MPI process

MPI_INIT must be the first MPI routine called in each process, and it can be called only once per process. It establishes the environment necessary for MPI to run, which may be customized by any runtime flags the MPI implementation provides (note that in the C binding, the command-line arguments are passed to this call).
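
To make this concrete, here is a minimal sketch (our own skeleton, not the tutorial's hello_mpi.f90 itself) showing where MPI_INIT fits; in the Fortran binding it takes only an error-code argument:

    program init_sketch
        use mpi             ! or: include 'mpif.h' on older systems
        implicit none
        integer :: ierror

        call MPI_INIT(ierror)    ! must come before any other MPI routine
        ! ... the rest of the MPI program goes here ...
        call MPI_FINALIZE(ierror)
    end program init_sketch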


Line 8: Finding the number of processes

MPI_COMM_SIZE returns the number of processes within a communicator. A communicator is MPI's mechanism for establishing separate communication "universes" (more on this later). Our sample program uses the predefined "world communicator" MPI_COMM_WORLD, which includes all your processes. MPI can determine the number of processes because you specify it when you issue the command used to launch MPI programs (for example, through the -n argument to mpiexec).
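
For example, in a hypothetical sketch like the following (program and variable names are ours), launching with "mpiexec -n 4" would make nprocs equal to 4 in every process:

    program size_sketch
        use mpi
        implicit none
        integer :: nprocs, ierror

        call MPI_INIT(ierror)
        ! nprocs receives the number of processes in the communicator,
        ! i.e., whatever count was requested at launch time
        call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
        print *, 'running with', nprocs, 'processes'
        call MPI_FINALIZE(ierror)
    end program size_sketch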


Line 9: Finding a process's rank

Rank is used to specify a particular process. It is an integer in the range 0 through (size-1), where size is the number of processes returned by MPI_COMM_SIZE. MPI_COMM_RANK returns the calling process's rank in the specified communicator.

It's often necessary for a process to know its own rank. For example, you might want to divide up computational work in a loop among all your processes, with each process handling a subset of the original range of the loop. One way to do this is for each process to use its rank to compute its range of loop indices, as in the sketch below.
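
Here is one hedged sketch of that idea (the names, the loop bound, and the assumption that the process count divides the loop count evenly are all ours):

    program loop_split
        use mpi
        implicit none
        integer, parameter :: n = 1000
        integer :: rank, nprocs, ierror, i, lo, hi, chunk

        call MPI_INIT(ierror)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

        ! divide iterations 1..n into contiguous blocks, one per process
        ! (assumes, for simplicity, that nprocs divides n evenly)
        chunk = n / nprocs
        lo = rank * chunk + 1
        hi = lo + chunk - 1
        print *, 'rank', rank, 'handles iterations', lo, 'through', hi

        do i = lo, hi
            ! ... each process works only on its own iterations ...
        end do

        call MPI_FINALIZE(ierror)
    end program loop_split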

When we learn about communicators, you will see that a process may belong to more than one communicator, and may have a different rank in each communicator. For now, assume we are only dealing with the predefined world communicator MPI_COMM_WORLD, which consists of all your processes, as illustrated in the sample program.


Line 15: Sending a message

In MPI, there are many flavors of sends and receives. This is one reason why there are more than 125 routines provided in MPI. In our basic set of six calls, we will look at just one type of send and one type of receive.

MPI_SEND is a blocking send. This means the call does not return control to your program until the data have been copied out of the location you specify in the parameter list. Because of this, you can safely change the data after the call without affecting the original message. (There are non-blocking sends for which this is not the case.)


The parameters for MPI_SEND, in order, are as follows (an annotated example call appears after the list):

buf
is the beginning of the buffer containing the data to be sent. For Fortran, this is often the name of an array in your program. For C, it is an address.
count
is the number of elements to be sent (not bytes).
datatype
is the type of data.
dest
is the rank of the process that is the destination for the message.
tag
is an arbitrary number that can be used to distinguish among messages.
comm
is the communicator.
ierror
is an error return code.
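
As a hypothetical fragment (the array a, destination rank 1, and tag 99 are our own choices; this is not compilable on its own, and the matching receive appears under Line 20 below), the arguments line up like this in Fortran:

    real :: a(10)
    integer :: ierror
    ! ...
    call MPI_SEND(a,              & ! buf:      start of the data to send
                  10,             & ! count:    number of elements, not bytes
                  MPI_REAL,       & ! datatype: type of each element
                  1,              & ! dest:     rank of the destination process
                  99,             & ! tag:      user-chosen message label
                  MPI_COMM_WORLD, & ! comm:     communicator
                  ierror)           ! ierror:   error return code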

Line 20: Receiving a message

Message passing in MPI-1 requires two-sided communication; one-sided communication is one of the features added in MPI-2. In two-sided (point-to-point) communication, each time one process sends a message, another process must explicitly receive it: for every place in your application where you call an MPI send routine, there must be a corresponding place where you call an MPI receive routine. Care must be taken to ensure that the send and receive parameters match; this will be discussed in more detail later.

Like MPI_SEND, MPI_RECV is blocking. This means the call does not return control to your program until all the received data have been stored in the variable(s) you specify in the parameter list. Because of this, you can use the data after the call and be certain they are all there. (There are non-blocking receives for which this is not the case.)


The parameters for MPI_RECV, in order, are as follows (a complete send/receive example appears after the list):

buf
is the beginning of the buffer where the incoming data are to be stored. For Fortran, this is often the name of an array in your program. For C, it is an address.
count
is the number of elements (not bytes) your receive buffer can hold; the incoming message may contain fewer elements than this.
datatype
is the type of data.
source
is the rank of the process from which data will be accepted (specify the wildcard MPI_ANY_SOURCE to accept a message from any sender).
tag
is an arbitrary number that can be used to distinguish among messages (the wildcard MPI_ANY_TAG matches any tag).
comm
is the communicator.
status
is an array (in Fortran) or structure (in C) of information about the received message. For example, if you specify a wildcard for source or tag, status will tell you the actual rank or tag of the message received.
ierror
is an error return code.
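
Putting the two calls together: in this minimal sketch (our own names and values; it needs at least two processes, e.g. "mpiexec -n 2"), rank 0 sends ten reals to rank 1, and rank 1 receives from any source and then inspects status:

    program sendrecv_sketch
        use mpi
        implicit none
        integer, parameter :: n = 10
        real :: a(n)
        integer :: rank, ierror
        integer :: status(MPI_STATUS_SIZE)

        call MPI_INIT(ierror)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

        if (rank == 0) then
            a = 3.14                              ! fill the send buffer
            call MPI_SEND(a, n, MPI_REAL, 1, 99, MPI_COMM_WORLD, ierror)
        else if (rank == 1) then
            ! accept a matching message from any sender; status records
            ! the actual source rank (and tag) of the received message
            call MPI_RECV(a, n, MPI_REAL, MPI_ANY_SOURCE, 99, &
                          MPI_COMM_WORLD, status, ierror)
            print *, 'rank 1 received from rank', status(MPI_SOURCE)
        end if

        call MPI_FINALIZE(ierror)
    end program sendrecv_sketch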

Line 25: Exiting from MPI

The last MPI call you make in each process should be MPI_FINALIZE. This helps ensure that MPI exits cleanly and that nothing is left over on the nodes when you are done. In order for the overall application to complete cleanly, all pending (non-blocking) communication should be completed before calling MPI_FINALIZE.

Note that your code can continue to execute after calling MPI_FINALIZE, but it can no longer call MPI routines.
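
A brief sketch of that point (our own example):

    program finalize_sketch
        use mpi
        implicit none
        integer :: ierror

        call MPI_INIT(ierror)
        ! ... all communication, including any pending non-blocking
        !     operations, must be completed before this point ...
        call MPI_FINALIZE(ierror)

        ! still legal: ordinary, non-MPI work after finalization
        print *, 'MPI is shut down; serial cleanup continues.'
    end program finalize_sketch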

 