We've seen that on Frontera you can choose from two different compiler suites and two different MPI stacks, each of which comes in several versions. However, not every compiler version is compatible with every MPI implementation. If you check module spider impi/19.0.9, for example, you will discover that only certain versions of the Intel and GCC compilers are compatible with Intel MPI 19.0.9:

$ module spider impi/19.0.9
---------------------------------------------------------------------------------
impi: impi/19.0.9
---------------------------------------------------------------------------------
Description:
Intel MPI Library (C/C++/Fortran for x86_64)
You will need to load all module(s) on any one of the lines below before the
"impi/19.0.9" module is available to load.
gcc/9.1.0
intel/19.0.5
intel/19.1.1

Clearly, when you compile and link a parallel application, you want to use a valid combination of compiler and MPI library versions. Fortunately, Frontera's module system is smart enough to keep you out of trouble. If you switch compilers, it automatically picks a compatible MPI stack for you; if you try to load an MPI version that is incompatible with your current compiler, it refuses. For example:

$ module load impi/18.0.5
Lmod has detected the following error:  These module(s) or extension(s) exist but
cannot be loaded as requested: "impi/18.0.5"
Try: "module spider impi/18.0.5" to see how to load the module(s).
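The flip side also works in your favor: if you swap compilers while an MPI module is loaded, Lmod automatically reloads an MPI build that matches the new compiler. The session below is a sketch of what this looks like; the exact versions and messages you see on Frontera may differ:

$ module load intel/19.0.5 impi/19.0.9
$ module swap intel gcc/9.1.0

Due to MODULEPATH changes, the following have been reloaded:
  1) impi/19.0.9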

Even better, the MPI libraries on Frontera have Application Binary Interface (ABI) compatibility. Once your code is compiled, you should be able to switch MPI stacks and run your application without recompiling. Still, when you run your application, it's safest to load the same pair of modules that were loaded when you compiled and linked your code; that way you can be sure that everything is consistent.
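For example, ABI compatibility means that a binary built against Intel MPI can, in principle, run under Frontera's other MPI stack without being recompiled. The sketch below assumes a source file named my_app.c and a compatible MVAPICH2 module; the module name, versions, and file names are illustrative assumptions:

$ module load intel/19.0.5 impi/19.0.9
$ mpicc -O2 -o my_app my_app.c    # build and link against Intel MPI
$ module swap impi mvapich2       # switch to an ABI-compatible MPI stack
$ ibrun ./my_app                  # run the same binary, no recompilation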

MPI for Prebuilt Applications and Libraries

Similar considerations apply to the prebuilt applications and libraries available through the module system. For example, running module spider gromacs/2019.3 shows which combinations of compiler and MPI modules work with that version of GROMACS. Here again, the module system helps you stay out of trouble.
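Once module spider has told you which combinations are valid, using the application is just a matter of loading one of those combinations first. The versions below are illustrative, not a guaranteed valid pairing:

$ module load intel/19.0.5 impi/19.0.9    # one compiler/MPI combination
$ module load gromacs/2019.3              # now the GROMACS module can load
$ module list                             # verify what is loaded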

Most importantly, if you intend to link your own application against an MPI-parallel library such as FFTW or PETSc, it's a good idea to run a command like module spider petsc/3.12 to see the allowed combinations of compilers and MPI stacks, and adjust your build environment accordingly.
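As a rough sketch, the resulting workflow might look like the following. The module versions are illustrative, and the TACC_PETSC_INC and TACC_PETSC_LIB environment variables are assumed to follow TACC's usual naming convention for library modules; verify them with module show petsc after loading:

$ module spider petsc/3.12                # find valid compiler/MPI combinations
$ module load intel/19.0.5 impi/19.0.9    # load one such combination
$ module load petsc/3.12
$ mpicc -o my_solver my_solver.c -I$TACC_PETSC_INC -L$TACC_PETSC_LIB -lpetsc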

In the future, the number of possible compiler/MPI combinations may grow, as there are two other MPI libraries of interest that are not yet supported on Frontera. If you are curious about them, you may want to try module spider occasionally to see whether the situation has changed (sample commands follow the list):

  • Open MPI (Open Source Message Passing Interface)
  • HPC-X (NVIDIA HPC-X Scalable HPC Software Toolkit)
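A quick check is one command per library; the module names below are guesses, since neither library is installed at the time of writing:

$ module spider openmpi
$ module spider hpcx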
 