Point-to-point communication encompasses all the methods MPI offers to transmit a message between a pair of processes. MPI features a broad range of point-to-point communication calls; they differ in subtle ways that can affect the performance of your MPI program. This roadmap details and differentiates the various types of point-to-point communication available in MPI-3.0 and discusses when and how to use each method. We will examine blocking as well as nonblocking communication calls and go through some examples using these methods. All exercises and examples are verified to work on Frontera and Vista.

MPI also provides for transmission of messages among groups of processes, which is called collective communication. Collective communication is the subject of a different roadmap.

This is the second of five related roadmaps in the Cornell Virtual Workshop that cover MPI. To see the other roadmaps available, please visit the complete roadmaps list.

Objectives

After you complete this roadmap, you should be able to:

  • Explain the four MPI communication modes
  • Distinguish between blocking and nonblocking communication
  • Identify deadlock avoidance strategies
  • Explain the wait, test, and probe functions and the request and status objects they use
Prerequisites

Requirements

The examples and exercises in this roadmap are designed and tested to run on Frontera or Vista. However, they should also run on other HPC systems with little to no modification. To use Frontera or Vista, you need:

  • A TACC account to log in to Frontera or Vista
  • A compute time allocation for Frontera or Vista
©  |   Cornell University    |   Center for Advanced Computing    |   Copyright Statement    |   Access Statement
CVW material development is supported by NSF OAC awards 1854828, 2321040, 2323116 (UT Austin) and 2005506 (Indiana University)