Roadmap: MPI Point-to-Point
Point-to-point communication encompasses all the methods MPI offers for transmitting a message between a pair of processes. MPI features a broad range of point-to-point communication calls, and they differ in subtle ways that can affect the performance of your MPI program. This roadmap details and differentiates the types of point-to-point communication available in MPI-3.0 and discusses when and how to use each one. We will examine both blocking and nonblocking communication calls and work through examples that use them. All exercises and examples have been verified to work on Stampede2 and Frontera.
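To make the idea concrete, here is a minimal sketch of blocking point-to-point communication in C: rank 0 sends one integer to rank 1 with MPI_Send, and rank 1 receives it with MPI_Recv. The payload value (42) and the message tag (0) are arbitrary choices for illustration.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;  /* arbitrary payload for illustration */
        /* Blocking send: returns once the send buffer is safe to reuse */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: returns once the message has arrived */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and launch with at least two processes (for example, with ibrun on Stampede2 or Frontera).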
MPI also provides for the transmission of messages among groups of processes; this is called collective communication, and it is the subject of a separate roadmap.
This is the second of five related roadmaps in the Cornell Virtual Workshop that cover MPI. To see the other roadmaps available, please visit the complete roadmaps list.
Objectives
After you complete this roadmap, you should be able to:
- Explain the four MPI communication modes
- Distinguish between blocking and nonblocking communication (see the sketch following this list)
- Identify deadlock avoidance strategies
- Explain the wait, test, and probe functions and their objects
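The fragment below is a minimal sketch of the nonblocking style mentioned in these objectives: two ranks exchange an integer by posting MPI_Irecv and MPI_Isend, then completing both requests with MPI_Waitall. Because neither call blocks, the exchange avoids the deadlock that can occur when two matched blocking sends wait on each other. Buffer contents and the tag are illustrative only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, sendbuf, recvbuf, partner;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {  /* this sketch pairs exactly two ranks */
        if (rank == 0) fprintf(stderr, "Run with exactly 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    partner = 1 - rank;  /* rank 0 pairs with rank 1, and vice versa */
    sendbuf = rank;

    /* Post the receive and the send; neither call blocks. */
    MPI_Irecv(&recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Wait for both requests before touching the buffers again. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recvbuf, partner);
    MPI_Finalize();
    return 0;
}
```

Posting the receive before the send is a common idiom; it lets the library place an incoming message directly into the user's buffer rather than into internal storage.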
Prerequisites
- Familiarity with HPC paradigms and MPI Basics.
- Experience programming in a high-level language such as Fortran or C is helpful, but readers without it can still follow the concepts and methodologies.
- A basic familiarity with parallel programming concepts.
Requirements
To use MPI on Stampede2 or Frontera:
- A TACC account to log in to Stampede2 or Frontera
- A compute allocation for Stampede2 or Frontera