MPI on HPC Systems
Steve Lantz, Christopher Cameron, Peter Vaillancourt, CAC Staff (original)
Cornell Center for Advanced Computing
Revisions: 8/2025, 5/2022, 3/2019, 6/2017, 2/2001 (original)
This topic guides researchers who are getting started with MPI on high-performance computing systems, especially TACC's Frontera and Vista.
Objectives
After you complete this topic, you should be able to:
- Demonstrate compiling and running an MPI program on TACC systems (see the sketch after this list)
- Explain the role of environment modules in compiling and running MPI programs
- Describe important considerations when using HPC nodes to parallelize code
- Explain why MPI programs cannot be run directly on login nodes
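To preview what the first objective involves, here is a minimal sketch of an MPI "hello world" program in C; the filename hello_mpi.c and the variable names are illustrative, not taken from the course material.

/* hello_mpi.c -- minimal MPI "hello world" sketch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank within MPI_COMM_WORLD */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down MPI cleanly */
    return 0;
}

On TACC systems, a program like this would typically be compiled with an MPI compiler wrapper (e.g., mpicc hello_mpi.c -o hello_mpi) after loading the appropriate compiler and MPI environment modules, and launched with TACC's ibrun command inside a Slurm batch job or interactive session rather than on a login node. Exact module names and launcher options vary by system, so consult the system's user guide; later pages in this topic cover these steps in detail.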
Prerequisites
- A working knowledge of general programming concepts
- Ability to program in a high-level language such as Fortran, C, or C++
- A basic familiarity with parallel programming concepts
CVW material development is supported by NSF OAC awards 1854828, 2321040, 2323116 (UT Austin) and 2005506 (Indiana University)