Sometimes, a program can be parallelized by taking advantage of language features, extensions, or libraries that are already capable of parallel computation. Some higher-level programming languages support distributed arrays and take care of scattering and gathering blocks of those arrays as necessary. Likewise, some computational libraries, such as ScaLAPACK, FFTW, PETSc, and the Intel oneAPI Math Kernel Library (oneMKL), offer distributed-memory parallel algorithms. When a program uses such a library's API to access its distributed data structures and compatible algorithms, the program logic can be expressed as a sequence of function calls, much like a serial program. When this code is compiled and executed on a cluster, the underlying library manages the parallel computation.
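PETSc illustrates this pattern well: it distributes vectors and matrices across MPI processes, yet the calling code still reads as a serial sequence of function calls. The minimal sketch below (assuming a recent PETSc release, 3.18 or later, for the PetscCall error-checking macro) creates a vector partitioned across all processes in the job and computes its 2-norm; PETSc handles the data distribution and the global reduction behind the scenes.

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
    Vec       x;
    PetscReal norm;

    /* Initialize PETSc (and MPI underneath it) */
    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Create a vector of global size 1,000,000; PETSC_DECIDE lets
       PETSc choose how to partition it across the MPI processes */
    PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
    PetscCall(VecSetSizes(x, PETSC_DECIDE, 1000000));
    PetscCall(VecSetFromOptions(x));

    /* Each process fills only its local block of the vector */
    PetscCall(VecSet(x, 1.0));

    /* VecNorm computes local partial sums, then a global reduction */
    PetscCall(VecNorm(x, NORM_2, &norm));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "2-norm: %g\n", (double)norm));

    PetscCall(VecDestroy(&x));
    PetscCall(PetscFinalize());
    return 0;
}
```

Launched under MPI (e.g., `mpiexec -n 4 ./vecnorm`, where the executable name is just an example), the same source runs unchanged on a single process or across many nodes of a cluster; nothing in the program text refers to ranks or message passing.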

If possible, use existing libraries and software features that support parallel computation. These tools are designed to handle common parallelization needs in a general way, and they are subject to extensive testing. As a result, they are usually easier to use and faster to adopt than custom solutions.
