Compiling Parallel Programs
Calls to the Message Passing Interface (MPI) are a typical part of a parallel application that is meant to run on a distributed memory resource such as Vista. As the name suggests, MPI describes a standard interface together with a set of expected message-passing actions. When the code runs, the expected actions are performed by a system-specific MPI implementation, which usually comes in the form of a library.
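To make this concrete, here is a minimal MPI program in C. It uses only core MPI routines (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize) that every MPI implementation provides, so it builds the same way with either of the stacks described below:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start up the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID (0..size-1)  */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes      */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime      */
    return 0;
}
```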
Two major MPI implementations ("stacks") are supported on Vista:
- Open MPI
- MVAPICH-Plus (Ohio State University)
Either of these implementations may be selected through the module system. A default MPI module is generally loaded already when you log in; as of December 2025, it is openmpi/5.0.5. To see all the available versions, run:
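```
module avail          # list modules available in your current environment
module spider openmpi # list all installed versions of a particular package
```

Note that module spider searches every installed module, including versions that are not visible under the currently loaded compiler.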
Regardless of which MPI implementation you choose, the commands you use to compile and link your code are always the same:
| Command | Language | Filename Extension(s) | Example |
|---|---|---|---|
| mpicc | C | .c | mpicc prog.c |
| mpicxx | C++ | .C, .cc, .cpp, .cxx | mpicxx prog.cpp |
| mpif77 | Fortran 77 | .f, .for, .ftn | mpif77 prog.f |
| mpif90 | Fortran 90 | .f90, .fpp | mpif90 prog.f90 |
The mpiXXX commands are "wrapper" scripts that invoke the underlying C/C++/Fortran compiler for you, adding any options that are needed to compile with, and properly link against, your selected MPI library. To see the full compile line that a wrapper generates, run mpicc -show (MVAPICH) or mpicc --showme (Open MPI).
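For example, to compile and link the hello-world program shown earlier (the source and executable names here are arbitrary):

```
mpicc -o hello_mpi hello_mpi.c
```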
Performance Options
As we saw with serial programs, it is a good idea to supply architecture-specific optimization options to the compiler to ensure you get the best per-core performance for your code:
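The exact options depend on the compiler that the wrapper invokes. For instance, assuming the wrapper is backed by GCC on Vista's Arm-based NVIDIA Grace CPUs, a compile line like the following turns on aggressive optimization and tunes the code for the host processor; other compilers, such as the NVIDIA HPC compilers, use different flags (e.g., -fast):

```
mpicc -O3 -mcpu=native -o prog prog.c
```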