What is persistent communication?

If a point-to-point message-passing routine is called repeatedly with the same arguments, persistent communication can be used to avoid the redundant work of setting up the message each time it is sent. Persistent communication reduces the setup overhead between the parallel task and the network adapter, but not the overhead of transferring data between network adapters on different nodes.

One class of program that is well suited to persistent communication is the data decomposition problem, in which points are updated based on the values of neighboring points. In this case, over many iterations, tasks send the points that border their neighbors' domains and receive the points that border their own. At each iteration, the location, amount, and type of the message data, the destination or source task, and the communicator all stay the same. The same message tags can be reused, because persistent communication requires each communication to be completed within the loop iteration that started it.

How does persistent communication work?

MPI objects are the internal representations of important entities such as groups, communicators, and datatypes. To increase program safety, programmers cannot directly create, write to, or destroy objects. They are manipulated via handles that are returned from or passed to MPI routines. An example of a handle is MPI_COMM_WORLD, which accesses a communicator object. You have also encountered the request handle returned by the nonblocking communication calls.

The request object accessed by this handle is the internal representation of a send or receive call. It archives all the information contained in the arguments to the message passing call (but not the message data itself), plus the communication mode and the status of the message.

When a program calls a nonblocking message-passing routine such as MPI_Isend, a request object is created and the communication is then started. These two steps are equivalent to two other MPI calls, MPI_Send_init and MPI_Start. When the program calls MPI_Wait, it waits until all necessary local operations have completed and then frees the memory used to store the request object. This second step is equivalent to a call to MPI_Request_free.

When you call a nonblocking message-passing routine many times with the same arguments, you are repeatedly creating the same request object. Similarly, when you verify the completion of these communications, you repeatedly free the request object. The idea behind persistent communication is to allow the request object to persist, and be reused, after the MPI_Wait call. You create the request object once (using MPI_Send_init), start and complete the communication as many times as needed (using MPI_Start and MPI_Wait), and then free the request object once (using MPI_Request_free).

©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement