Exercise - Compiling
In this exercise, we will use Frontera's default environment to compile two C codes that have been parallelized with OpenMP and MPI. At the end, we will briefly try switching compilers.
The first code is a simple, pure OpenMP code, ompsumtest.c. This code can be compiled as if it were a purely serial code, or it can be compiled so that the OpenMP pragmas take effect. To verify this claim, try compiling it both ways on a login node, with the Intel compiler:
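The original command listing is not preserved here; a plausible pair of invocations with the classic Intel compiler (icc) would look like the following (the output file names are illustrative assumptions):

```shell
# Serial build: the OpenMP pragmas are ignored (each one triggers a warning)
icc -o ompsumtest_serial ompsumtest.c

# Parallel build: -qopenmp activates the OpenMP pragmas
icc -qopenmp -o ompsumtest ompsumtest.c
```

Both commands should succeed on a Frontera login node with the default Intel module loaded.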
You will see a few warnings when -qopenmp is absent; these can be suppressed with the -Wno-unknown-pragmas option. When -qopenmp is present, you can confirm that the for-loops were parallelized with OpenMP as expected by examining the appropriate optimization report:
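A sketch of how such a report can be generated with the classic Intel compiler (the report level 2 is an assumption; any nonzero level works):

```shell
# Request an optimization report restricted to the OpenMP phase
icc -qopenmp -qopt-report=2 -qopt-report-phase=openmp -c ompsumtest.c

# The report is written to a .optrpt file next to the object file;
# look for "OpenMP DEFINED LOOP WAS PARALLELIZED" entries
cat ompsumtest.optrpt
```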
The Cascade Lake processors in the login nodes are identical to those in the compute nodes, so the appropriate optimizations are produced if the -xHost architecture option is specified:
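A possible form of this command, assuming the same report level as before (the -O3 level is an assumption, not taken from the original):

```shell
# -xHost makes icc target the instruction set of the build host
# (Cascade Lake); the report now covers all optimization phases
icc -qopenmp -xHost -O3 -qopt-report=2 -c ompsumtest.c
cat ompsumtest.optrpt
```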
In this case, we didn't limit the optimization report to just the OpenMP phase, so the report will contain numerous details about vectorization, code generation, etc.
The second code is a hybrid OpenMP/MPI code, ompipi.c. It must be compiled with the appropriate MPI wrapper, mpicc, regardless of whether the underlying compiler is Intel or GNU C:
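With the default Intel modules loaded, the wrapper invokes icc underneath, so the Intel OpenMP flag applies (the output file name is an illustrative assumption):

```shell
# mpicc adds the MPI include and library paths; -qopenmp enables
# the OpenMP pragmas in the hybrid code
mpicc -qopenmp -o ompipi ompipi.c
```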
The mpicc wrapper even accepts gcc-style flags while the Intel module is loaded, because the Intel compiler is able to parse most gcc options:
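For example, the gcc spelling of the OpenMP flag works too, since icc accepts -fopenmp as a synonym for -qopenmp (the -O2 level is an assumption):

```shell
# gcc-style flags, passed through mpicc to the Intel compiler
mpicc -fopenmp -O2 -o ompipi ompipi.c
```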
Notice that when a new compiler module is loaded, the Lmod system is smart enough to unload the old compiler module and reload the Intel MPI module to maintain compatibility.
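A sketch of the swap, assuming the module names on Frontera are intel, gcc, and impi:

```shell
# Loading gcc causes Lmod to unload intel automatically and to
# reload impi (the Intel MPI module) built against the new compiler
module load gcc
module list        # intel is gone; gcc and a matching impi are loaded

# mpicc now wraps gcc, so the native gcc OpenMP flag is required
mpicc -fopenmp -o ompipi ompipi.c
```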