The easiest way to run R in parallel is to use some or all of the cores on a single node. Obviously, this means you can only scale up to the number of cores on that node; memory availability can also become an issue. Due to hyperthreading on Stampede2, the detectCores function reports the maximum number of tasks allowable (the number of hardware threads), not the number of physical cores. As we shall see, and as the Stampede2 documentation notes on this issue, you will not generally want to use all of the available cores/hardware threads:
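For example, a quick check might look like the following (a minimal sketch; the count reported depends on the node type):

    library(parallel)

    # Report the number of "cores" R can see. On a Stampede2 KNL node this
    # returns 272 hardware threads (68 physical cores x 4 hyperthreads per
    # core), not the number of physical cores.
    detectCores()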

It is fairly straightforward to do "multicore" processing in R. First you invoke the R parallel library, and then you replace functions with their multicore equivalents wherever possible. The coding changes are minimal.

The following is a trivial example using mclapply instead of lapply (lapply, which stands for "list apply", performs an operation on each element of a list or vector). If performance or memory is an issue with mclapply, you can specify a number of "cores" anywhere up to the maximum.
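A minimal sketch of such an example follows; the slow_square function and the input values are purely illustrative, not the original workload:

    library(parallel)

    # An illustrative, deliberately slow function to apply to each element.
    slow_square <- function(x) {
      Sys.sleep(0.01)
      x * x
    }
    values <- 1:1000

    # Serial version.
    serial_result <- lapply(values, slow_square)

    # Multicore version: mc.cores sets the number of worker processes.
    parallel_result <- mclapply(values, slow_square, mc.cores = 32)

    identical(serial_result, parallel_result)   # should be TRUE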

Note that performance with 272 "cores" is worse than running with just one! Even 68 "cores" does not do as well as 32 (though both beat a single core). You may need to experiment to find the best specification.
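One way to run such an experiment is simply to time the same call at several worker counts (again using the illustrative function from the sketch above):

    library(parallel)

    slow_square <- function(x) {
      Sys.sleep(0.01)
      x * x
    }
    values <- 1:1000

    # Time the same job at several worker counts; the best setting depends
    # on the workload and on the node.
    for (n in c(1, 32, 68, 272)) {
      elapsed <- system.time(mclapply(values, slow_square, mc.cores = n))["elapsed"]
      cat(n, "workers:", elapsed, "seconds\n")
    }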

There is a completely different way to take advantage of multiple cores on a node. R can be built and linked with Intel MKL, which provides high-performance implementations of the BLAS and LAPACK routines. If your R program makes heavy use of matrix computations, MKL offers a way to multithread that work through OpenMP. Often you do not need to change your code at all to let MKL take advantage of multiple cores; it can be as easy as setting the OMP_NUM_THREADS environment variable. Note that on Stampede2 and Frontera, OMP_NUM_THREADS will have a default value, which may vary between compute nodes accessed via batch and those accessed via the Visualization Portal, so it is safer to set it explicitly yourself. You will probably have to run some tests to find a good value, and beware that MPI tasks (such as those outlined later) could then each spawn additional threads via MKL.
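As an illustration, a matrix-heavy computation such as the one sketched below needs no parallel code in R at all; the thread count shown is only an assumed example value, set in the shell before R is launched:

    # In the shell or batch script, before starting R (assumed example value):
    #   export OMP_NUM_THREADS=8

    # The matrix multiply below calls BLAS; when R is linked against MKL,
    # that call can be spread across the requested OpenMP threads with no
    # changes to the R code.
    n <- 4000
    a <- matrix(rnorm(n * n), nrow = n)
    b <- matrix(rnorm(n * n), nrow = n)
    system.time(ab <- a %*% b)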
