Compiling Parallel Programs
Calls to the Message Passing Interface (MPI) are a typical part of a parallel application that is meant to run on a distributed memory resource such as Frontera. As the name suggests, MPI describes a standard interface together with a set of expected message-passing actions. When the code runs, the expected actions are performed by a system-specific MPI implementation, which usually comes in the form of a library.
Two major MPI implementations ("stacks") are supported on Frontera:
- Intel MPI Library
- MVAPICH2 (Ohio State University)
Either of these implementations may be selected through the module system. A default MPI module, impi/19.0.9 (as of December 2022), is generally already loaded when you log in. To see all the available versions, run:
$ module spider impi
$ module spider mvapich2
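For example, to switch from the default Intel MPI stack to MVAPICH2, you can swap modules. This is a sketch: the exact module names and versions are site-dependent, so confirm them with module spider first.

```shell
# Swap the currently loaded Intel MPI module for MVAPICH2.
# Module names/versions vary by site; verify with "module spider mvapich2".
module swap impi mvapich2

# Confirm which modules are now loaded.
module list
```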
Regardless of which MPI implementation you choose, the commands you use to compile and link your code are always the same:
| Command | Language | Filename Extension(s) | Example |
|---|---|---|---|
| mpicc | C | .c | mpicc prog.c |
| mpicxx | C++ | .C, .cc, .cpp, .cxx | mpicxx prog.cpp |
| mpif77 | Fortran 77 | .f, .for, .ftn | mpif77 prog.f |
| mpif90 | Fortran 90 | .f90, .fpp | mpif90 prog.f90 |
The mpiXXX commands are "wrapper" scripts that call the underlying C/C++/Fortran compiler for you, adding any compiler and linker options needed to link your selected MPI library properly with your code. If you execute mpicc -show, you can see the full compile line that the wrapper generates.
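As a quick check that the wrapper and library work together, you can compile and run a minimal MPI program. The sketch below is a standard MPI "hello world" in C, using only core MPI calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize); it can be built with any of the wrappers above.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI processes  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime      */
    return 0;
}
```

Compile it with mpicc hello.c -o hello; the same command works regardless of which MPI stack is loaded, since the wrapper supplies the correct include and library paths.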
Performance Options
As we saw with serial programs, it is a good idea to supply architecture-specific optimization options to the compiler to ensure you get the best per-core performance for your code:
$ mpicc -xCORE-AVX512 -O3 mycode.c -o myexe # for Intel
$ mpif90 -xCORE-AVX512 -O3 mycode.f90 -o myexe # for Intel
$ mpicc -march=cascadelake -O3 mycode.c -o myexe # for GCC
$ mpif90 -march=cascadelake -O3 mycode.f90 -o myexe # for GCC
Note that such options must be tailored to the underlying compiler (Intel or GCC), since the two compilers do not accept the same architecture flags.
CVW material development is supported by NSF OAC awards 1854828, 2321040, 2323116 (UT Austin) and 2005506 (Indiana University)