Roadmap: Message Passing Interface (MPI)
This roadmap will introduce you to the Message Passing Interface (MPI), a specification that is the de facto standard for distributed memory computing. MPI consists of a collection of routines for exchanging data among the processes in a distributed memory parallel program and synchronizing their work.
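To make this concrete, below is a minimal sketch of an MPI program in C. The file name hello_mpi.c and the message value are illustrative, not part of this roadmap's exercises: every process reports its rank, and one process passes a value to another with MPI_Send and MPI_Recv.

```c
/* hello_mpi.c -- a minimal MPI sketch: each process reports its rank,
 * and rank 1 sends one integer to rank 0 to illustrate message passing. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start up the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    if (size > 1) {
        int value;
        if (rank == 1) {
            value = 42;                      /* illustrative payload */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 0 received %d from rank 1\n", value);
        }
    }

    MPI_Finalize();                          /* shut down the MPI runtime */
    return 0;
}
```

With an MPI implementation loaded, a program like this is typically compiled with a wrapper such as mpicc and launched as multiple processes; on Stampede2 and Frontera the launcher is ibrun, while generic clusters commonly use mpiexec or mpirun. Later topics in this roadmap walk through these steps in detail.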
We will provide an overview of MPI's functionality and key concepts, and give you a first look at how to use MPI on the Stampede2 and Frontera systems. This roadmap is intended to guide researchers getting started with MPI on high performance computing systems; all exercises and examples have been verified to work on both machines.
This is the first of five MPI-related roadmaps in the Cornell Virtual Workshop. To see the other roadmaps available, please visit the complete CVW roadmaps list.
Objectives
After you complete this roadmap, you should be able to:
- Explain the basic concepts of MPI
- Write a simple MPI program
- Compile MPI code
- Run a simple MPI program on Stampede2 or Frontera
- Run parallel jobs on Stampede2 using Slurm (a sample batch script follows this list)
- Select the appropriate MPI implementation for your platform and work
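As a preview of the Slurm material, here is a hedged sketch of a batch script for launching an MPI executable on a TACC system such as Stampede2 or Frontera. The job name, node and task counts, queue name, and time limit are illustrative assumptions that you would adjust for your own allocation and system.

```bash
#!/bin/bash
#SBATCH -J hello_mpi          # job name (illustrative)
#SBATCH -o hello_mpi.o%j      # stdout file; %j expands to the job id
#SBATCH -N 2                  # number of nodes (illustrative)
#SBATCH -n 8                  # total number of MPI tasks (illustrative)
#SBATCH -p development        # queue (partition) name -- site dependent
#SBATCH -t 00:05:00           # wall-clock time limit

ibrun ./hello_mpi             # TACC's MPI launcher; use mpiexec/mpirun elsewhere
```

Submitting such a script with sbatch places the job in the queue; ibrun derives the process count and layout from the Slurm settings above. The Slurm topics later in this roadmap cover these options in detail.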
Prerequisites
- A working knowledge of general programming concepts
- Ability to program in a high-level language such as Fortran, C, or C++
- A basic familiarity with parallel programming concepts