In this exercise we will use Frontera's default setup to compile a couple of C codes that have been parallelized with OpenMP and MPI. At the end, we will briefly try switching compilers.

The first code is a simple, pure OpenMP code, ompsumtest.c. This code can be compiled as if it were a purely serial code, or it can be compiled so that the OpenMP pragmas take effect. To verify this claim, try compiling it both ways on a login node, with the Intel compiler:


$ icc ompsumtest.c -o sumtest
$ icc -qopenmp ompsumtest.c -o ompsumtest
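To see why the same source file builds both ways, here is a minimal sketch of a pure OpenMP summation code in the spirit of ompsumtest.c (the actual source will differ). The key point is that the OpenMP pragma is simply ignored, with a warning, when OpenMP support is not enabled:

/* Hypothetical sketch in the spirit of ompsumtest.c, not the actual
 * source. Without -qopenmp, the pragma below is treated as an unknown
 * pragma and the loop runs serially; with -qopenmp, the loop iterations
 * are divided among threads, and the partial sums are combined at the end. */
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void) {
    const long n = 100000000;
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++) {
        sum += 1.0/(double)i;   /* partial sums of the harmonic series */
    }

#ifdef _OPENMP
    printf("OpenMP is enabled; max threads = %d\n", omp_get_max_threads());
#endif
    printf("sum = %f\n", sum);
    return 0;
}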

You will see a few warnings when -qopenmp is absent; these can be suppressed with the -Wno-unknown-pragmas option. When -qopenmp is present, you can confirm that the for-loops were parallelized with OpenMP as expected by examining the appropriate optimization report:


$ icc -qopenmp -qopt-report=1 -qopt-report-phase=openmp ompsumtest.c -o ompsumtest
$ cat ompsumtest.optrpt
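For instance, you might skim the report for its OpenMP-related remarks with grep:

$ grep -i openmp ompsumtest.optrpt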

The Cascade Lake processors in the login nodes are identical to those in the compute nodes, so the appropriate optimizations are produced if the -xHost architecture flag is specified:


$ icc  -O3 -xHost -qopenmp -qopt-report=1 ompsumtest.c -o ompsumtest
$ less ompsumtest.optrpt

In this case, we didn't limit the optimization report to just the OpenMP phase, so the report will contain numerous details about vectorization, code generation, etc.
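If you would like to run the resulting executable, it is best to do so in an interactive session on a compute node (e.g., one started with TACC's idev utility), after setting the desired number of threads, for example:

$ export OMP_NUM_THREADS=4
$ ./ompsumtest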

The second code is a hybrid OpenMP/MPI code, ompipi.c. It must be compiled with a suitable MPI wrapper, which on Frontera is mpicc, regardless of whether the Intel or GNU C compiler sits underneath:


$ mpicc --version
$ mpicc -O3 -xHost -qopenmp ompipi.c -o ompipi
$ ml gcc
$ mpicc --version
$ mpicc -O3 -march=native -fopenmp ompipi.c -o ompipi
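For reference, here is a minimal sketch of what a hybrid code like ompipi.c might contain (again, the actual source will differ). Each MPI rank uses an OpenMP-parallelized loop to integrate its share of 4/(1+x^2) on [0,1] by the midpoint rule, and MPI_Reduce combines the partial results into an approximation of pi:

/* Hypothetical hybrid OpenMP/MPI sketch in the spirit of ompipi.c,
 * not the actual source. Ranks stride through the intervals of a
 * midpoint-rule quadrature; each rank's loop is split among its
 * OpenMP threads, and MPI_Reduce sums the per-rank results. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    const long n = 100000000;              /* total quadrature intervals */
    const double h = 1.0/(double)n;        /* interval width             */
    int rank, nranks;
    double local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel for reduction(+:local)
    for (long i = rank; i < n; i += nranks) {
        double x = h*((double)i + 0.5);    /* midpoint of interval i */
        local += 4.0/(1.0 + x*x);
    }
    local *= h;

    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("pi is approximately %.15f\n", pi);

    MPI_Finalize();
    return 0;
}

On Frontera, a hybrid executable like this would be launched with ibrun from a batch or idev session, with OMP_NUM_THREADS controlling the number of threads per MPI rank.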

It turns out that the mpicc command accepts gcc-style flags even when the Intel module is loaded, because the Intel compiler understands many of the same flags as gcc:


$ ml intel
$ mpicc --version
$ mpicc -O3 -march=native -fopenmp ompipi.c -o ompipi

Notice that when a new compiler module is loaded, the Lmod system is smart enough to unload the old compiler module and reload the Intel MPI module, so that the MPI wrappers remain compatible with the new compiler.
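You can verify this behavior by listing the loaded modules after each swap:

$ ml list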

 