There are a number of strategies that a programmer can use to avoid deadlocks in their code. Here we present four of the most common; a brief code sketch of each appears after the list:

  1. Arrange for a different ordering of calls between tasks
    Have one task post its receive first and the other post its send first. This clearly establishes that the message in one direction will be transferred before the message in the other direction (first sketch below).
  2. Use nonblocking calls
    Have each task post a nonblocking receive before it does any other communication. This allows each message to be received no matter what the task is working on when the message arrives, and no matter what order the sends are posted in (second sketch below).
  3. Try MPI's coupled Sendrecv calls
    MPI_Sendrecv and MPI_Sendrecv_replace can be elegant solutions. In the _replace version, the system allocates some buffer space to handle the exchange of messages (third sketch below).
  4. Go with buffered mode
    Use buffered sends so that computation can proceed after the message has been copied to the user-supplied buffer. This allows the receives to be executed, and it resolves deadlock in much the same way that strategy 2 does, though with somewhat different performance characteristics (fourth sketch below).
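
To make these strategies concrete, the sketches below use a minimal, hypothetical exchange in which exactly two ranks each send one integer to the other; the variable names, message tag, and two-rank layout are illustrative assumptions, not part of the discussion above. First, strategy 1: rank 0 sends and then receives, while rank 1 does the reverse, so each blocking call has a matching call already waiting on the other rank.

    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, other, outgoing, incoming;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;        /* partner rank; assumes exactly 2 ranks */
        outgoing = rank;

        if (rank == 0) {
            /* Rank 0 sends first, then receives */
            MPI_Send(&outgoing, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
            MPI_Recv(&incoming, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            /* Rank 1 posts its receive first, then sends; the calls are
               matched in opposite order, so the exchange cannot deadlock */
            MPI_Recv(&incoming, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&outgoing, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }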
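
Strategy 2 might look like the following under the same assumptions: each rank posts a nonblocking MPI_Irecv before anything else, so the later blocking sends can complete regardless of which rank sends first.

    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, other, outgoing, incoming;
        MPI_Request req;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;        /* partner rank; assumes exactly 2 ranks */
        outgoing = rank;

        /* Post the nonblocking receive before any other communication */
        MPI_Irecv(&incoming, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &req);

        /* Each rank's blocking send now has a matching receive already
           posted on the other rank, so neither rank can block the other */
        MPI_Send(&outgoing, 1, MPI_INT, other, 0, MPI_COMM_WORLD);

        /* Complete the receive before using 'incoming' */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }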
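
Strategy 3, sketched under the same assumptions, collapses the send and receive into a single MPI_Sendrecv call; MPI_Sendrecv_replace works similarly but reuses one buffer for both the outgoing and incoming message.

    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, other, outgoing, incoming;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;        /* partner rank; assumes exactly 2 ranks */
        outgoing = rank;

        /* One call performs both the send and the receive; MPI orders the
           two transfers so that the paired exchange cannot deadlock */
        MPI_Sendrecv(&outgoing, 1, MPI_INT, other, 0,
                     &incoming, 1, MPI_INT, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }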
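
Finally, a sketch of strategy 4 under the same assumptions: a user-supplied buffer is attached, and MPI_Bsend copies the message into it and returns immediately, so both ranks can go on to post their receives even though both "send" first.

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, other, outgoing, incoming, bufsize;
        char *buf;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;        /* partner rank; assumes exactly 2 ranks */
        outgoing = rank;

        /* Attach a user-supplied buffer large enough for one int plus overhead */
        bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        /* Bsend returns as soon as the message is copied into the attached
           buffer, so both ranks reach their receives without blocking */
        MPI_Bsend(&outgoing, 1, MPI_INT, other, 0, MPI_COMM_WORLD);
        MPI_Recv(&incoming, 1, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Detach waits until any buffered messages have been delivered */
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);

        MPI_Finalize();
        return 0;
    }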

Often, deadlock is simply the result of not carefully thinking through your parallelization scheme, so solution 1 should always be your first stop. Solutions 2-4 are for situations where deadlock is truly unavoidable due to runtime inconsistencies in the synchronization of your code. Thus, when you do run into a deadlock, your first step should not be to make all of your message passing nonblocking. It will often be easier in the long run to look at the message ordering and try to resolve the problem that way first.
