At the highest level of thread support, MPI_THREAD_MULTIPLE, all the threads in thread-parallel regions of a hybrid code are able to send messages to one another simultaneously. However, some special techniques may be required to make sure that these messages are handled correctly. The code excerpt below gives one example of how to do this; it will be expanded upon in the exercise to follow.

Example of MPI_THREAD_MULTIPLE: all threads make calls to MPI_Send and MPI_Recv. With the MPI_THREAD_MULTIPLE level of thread support, all threads may call MPI concurrently, with no restrictions.

[Figure: example Fortran code with multiple threads and MPI calls]
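
Below is a minimal sketch of this pattern, assuming that MPI_THREAD_MULTIPLE was granted at initialization and that ranks 0 and 1 run the same number of OpenMP threads; the names thread_ping and payload are illustrative rather than taken from the original listing.

    ! Sketch: each thread on rank 0 sends one message to the matching
    ! thread on rank 1, using the thread number as the message tag.
    program thread_ping
       use mpi
       use omp_lib
       implicit none
       integer :: ierr, provided, rank, ithread, payload
       integer :: status(MPI_STATUS_SIZE)

       ! Request full thread support; a complete code would verify that
       ! provided == MPI_THREAD_MULTIPLE before proceeding.
       call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    !$omp parallel private(ithread, payload, status, ierr)
       ithread = omp_get_thread_num()
       if (rank == 0) then
          payload = ithread
          ! Tag the message with the sending thread's number
          call MPI_Send(payload, 1, MPI_INTEGER, 1, ithread, &
                        MPI_COMM_WORLD, ierr)
       else if (rank == 1) then
          ! Receive only the message whose tag matches this thread's number
          call MPI_Recv(payload, 1, MPI_INTEGER, 0, ithread, &
                        MPI_COMM_WORLD, status, ierr)
       end if
    !$omp end parallel

       call MPI_Finalize(ierr)
    end program thread_ping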
Notice that the send and receive calls communicate between rank 0 and rank 1, and that the threads use ithread as the tag to differentiate their messages.

Notes on this program:

  • The thread ID as well as the rank can be used in directing communication
  • The technique is illustrated in the multithreaded "ping" (send/receive) example above
  • A total of nthreads messages are sent and received between the two MPI ranks
  • On the sending process, each thread sends a message tagged with its thread number
  • Recall that an MPI message can be received only by an MPI_Recv with a matching tag
  • On the receiving process, each thread looks for a message tagged with its thread number
  • Therefore, communication occurs pairwise, between threads whose numbers match

The above example illustrates the use of point-to-point communication among threads. Special attention must be given, however, if multiple threads will be issuing MPI collective communications. According to the MPI standard: "In these situations, it is the user's responsibility to ensure that the same communicator is not used concurrently by two different collective communication calls at the same process." This means that MPI_Comm_dup() must be invoked to make copies of MPI_COMM_WORLD if more than one thread in the same process will be using that communicator for collective calls concurrently.
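
One way to satisfy this rule is sketched below; the program name dup_per_thread and the array comms are illustrative. The duplications are performed serially before the parallel region, since MPI_Comm_dup is itself a collective and must be called in the same order on every process, and each thread then issues collectives only on its own copy of the communicator.

    ! Sketch: give each OpenMP thread its own duplicate of MPI_COMM_WORLD
    ! so that no communicator is used by two concurrent collective calls.
    program dup_per_thread
       use mpi
       use omp_lib
       implicit none
       integer :: ierr, provided, rank, nthreads, i, ithread, total
       integer, allocatable :: comms(:)

       call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

       nthreads = omp_get_max_threads()
       allocate(comms(0:nthreads-1))

       ! Duplicate serially, outside the parallel region, so that every
       ! process performs the collective MPI_Comm_dup calls in the same order
       do i = 0, nthreads-1
          call MPI_Comm_dup(MPI_COMM_WORLD, comms(i), ierr)
       end do

    !$omp parallel private(ithread, total, ierr)
       ithread = omp_get_thread_num()
       ! Each thread performs a collective on its own communicator
       call MPI_Allreduce(ithread, total, 1, MPI_INTEGER, MPI_SUM, &
                          comms(ithread), ierr)
    !$omp end parallel

       do i = 0, nthreads-1
          call MPI_Comm_free(comms(i), ierr)
       end do
       deallocate(comms)
       call MPI_Finalize(ierr)
    end program dup_per_thread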

 