Roadmap: MPI Advanced Topics
This roadmap discusses some of the more advanced functionality available in MPI-3. It will show you how to:
- overlay your data with MPI datatypes to speed up message passing (see the first sketch after this list)
- arrange your MPI processes into virtual groupings and topologies (second sketch below)
- use the MPI Tool Information Interface (MPI_T) to get, and sometimes set, control variables and to view performance variables (third sketch below)
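To give a flavor of the first topic, here is a minimal sketch (not taken from the roadmap's exercises) of a derived datatype: MPI_Type_vector describes one column of a row-major matrix so it can be sent as a single message, with no hand packing. The 4x5 matrix and the choice of column 2 are purely illustrative.

```c
#include <mpi.h>
#include <stdio.h>

#define ROWS 4
#define COLS 5

/* Run with at least two processes, e.g., mpiexec -n 2 ./a.out */
int main(int argc, char *argv[])
{
    int rank;
    double a[ROWS][COLS];
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ROWS blocks of 1 double, each COLS doubles apart: one matrix column */
    MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                a[i][j] = 10.0 * i + j;
        /* Send column 2 of a as one typed message */
        MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double col[ROWS];
        /* The strided column arrives as ROWS contiguous doubles */
        MPI_Recv(col, ROWS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        for (int i = 0; i < ROWS; i++)
            printf("col[%d] = %.1f\n", i, col[i]);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```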
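The second topic, virtual topologies, lets MPI arrange processes in a logical layout for you. This sketch (again illustrative, not from the roadmap) builds a 2D periodic grid with MPI_Cart_create and asks each rank for its grid coordinates and neighbors.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int size, grid_rank;
    int dims[2] = {0, 0};     /* 0 means: let MPI_Dims_create choose */
    int periods[2] = {1, 1};  /* wrap around in both dimensions */
    int coords[2], left, right, up, down;
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Factor size into a balanced 2D grid, then create the topology;
       reorder = 1 lets the implementation renumber ranks for locality */
    MPI_Dims_create(size, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);

    /* Use the rank within the new communicator, since reordering is allowed */
    MPI_Comm_rank(grid, &grid_rank);
    MPI_Cart_coords(grid, grid_rank, 2, coords);
    MPI_Cart_shift(grid, 0, 1, &up, &down);     /* neighbors along dim 0 */
    MPI_Cart_shift(grid, 1, 1, &left, &right);  /* neighbors along dim 1 */

    printf("rank %d at (%d,%d): left %d right %d up %d down %d\n",
           grid_rank, coords[0], coords[1], left, right, up, down);

    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}
```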
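For the third topic, the MPI Tool Information Interface, the sketch below initializes MPI_T and lists the names of the control variables the MPI implementation exposes. How many variables exist, and what they are called, depends entirely on the implementation, so the output will differ between MPI libraries.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, ncvars, rank;

    /* MPI_T may be initialized before (and finalized after) MPI itself */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_T_cvar_get_num(&ncvars);
    if (rank == 0) {
        printf("%d control variables exposed\n", ncvars);
        for (int i = 0; i < ncvars; i++) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            /* Query each control variable's metadata by index */
            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("cvar %d: %s\n", i, name);
        }
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```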
This roadmap does not cover MPI's Parallel I/O features. Although using parallel I/O requires some additional work up front, the payoff can be well worth it: it yields a single, unified file for visualization and pre- and post-processing. As this is a large topic in itself, we refer the reader to the Parallel I/O roadmap.
All exercises and examples are verified to work on Stampede2 and Frontera.
This is the fourth of five related roadmaps in the Cornell Virtual Workshop that cover MPI. To see the other roadmaps available, please visit the complete roadmaps list.
Objectives
After you complete this roadmap, you should be able to:
- Incorporate more advanced MPI routines into your own code
- Define and use general datatypes
- Create logical groupings and layouts among MPI processes
- Inspect MPI-implementation variables related to control and performance
Prerequisites
- A basic knowledge of parallel programming and MPI. Information on these prerequisites can be found in other topics (Parallel Programming Concepts and High-Performance Computing, MPI Basics).
- Ability to program in a high-level language such as Fortran or C.
- The MPI Collective Communications roadmap logically precedes this roadmap, but it is not a prerequisite.
Requirements
Requirements include:
- A TACC account to log in to Stampede2 or Frontera
- A TACC allocation for compute time