Taking the hello_mpi.c example from earlier in the topic, we can modify it to use MPI_Ibcast in place of MPI_Bcast. With MPI_Bcast, we know that all message passing has completed by the time the function returns; this is not the case with MPI_Ibcast. To see what can go wrong, we modify the buffer in the root process immediately after the call to MPI_Ibcast:
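
The following is a minimal sketch of what the modified program might look like. The exact contents of hello_mpi.c may differ, and the 100-character buffer and variable names here are assumptions; the essential point is the second strcpy, executed on the root immediately after MPI_Ibcast, before the broadcast is known to be complete.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    char message[100];
    MPI_Request request;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Root prepares the string to be broadcast */
    if (rank == 0)
        strcpy(message, "Hello, world");

    /* Nonblocking broadcast: it may still be in progress when this call returns */
    MPI_Ibcast(message, 100, MPI_CHAR, 0, MPI_COMM_WORLD, &request);

    /* Deliberately overwrite the root's buffer right away -- this is the error
       we want to demonstrate; the other ranks also print without waiting */
    if (rank == 0)
        strcpy(message, "What will happen?");

    printf("Message from process %d : %s\n", rank, message);

    MPI_Finalize();
    return 0;
}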

Start an interactive session on one node with 16 cores:

$ idev -N 1 -n 16
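
Inside the session, compile the code with the MPI compiler wrapper. The source and executable names below (ibcast_demo.c, ibcast_demo) are placeholders; substitute your own:

$ mpicc -o ibcast_demo ibcast_demo.c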

After compiling the example code, run the program using ibrun:

$ ibrun -np 16 <your_program>

You should see output that varies from run to run. Clearly, we have induced some chaotic behavior with the second invocation of strcpy: because the broadcast may still be in progress when the root overwrites its buffer, different processes can end up printing the original string, the modified string, or a corrupted mixture of the two.

Tip: Clean up after nonblocking calls!

Every nonblocking call in MPI should be completed with a matching call to MPI_Wait, MPI_Test, or MPI_Request_free. See nonblocking completion in the MPI Point-to-point roadmap for more details.

The same MPI_Wait routine that is used in nonblocking point-to-point communication can fix this issue:
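
Here is a sketch of the corrected portion of the program, using the same assumed variable names as above. No process touches the buffer until MPI_Wait confirms that the broadcast has completed on that process.

    /* Nonblocking broadcast, as before */
    MPI_Ibcast(message, 100, MPI_CHAR, 0, MPI_COMM_WORLD, &request);

    /* Complete the collective before any process reuses the buffer */
    MPI_Wait(&request, MPI_STATUS_IGNORE);

    /* Now it is safe for the root to overwrite its copy of the message */
    if (rank == 0)
        strcpy(message, "What will happen?");

    printf("Message from process %d : %s\n", rank, message);

Because the broadcast is complete everywhere before the root modifies its buffer, only the root prints the overwritten string; every other rank prints the string it received.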

This time we should see more sensible output:

Message from process 0 : What will happen?
Message from process 1 : Hello, world
Message from process 8 : Hello, world
Message from process 2 : Hello, world
Message from process 9 : Hello, world
Message from process 10 : Hello, world
Message from process 11 : Hello, world
Message from process 3 : Hello, world
Message from process 4 : Hello, world
Message from process 5 : Hello, world
Message from process 6 : Hello, world
Message from process 7 : Hello, world
Message from process 15 : Hello, world
Message from process 14 : Hello, world
Message from process 12 : Hello, world
Message from process 13 : Hello, world