In the context of HPC, a task is any distinct stream of instructions issued by parallel code. Depending on the parallelization strategy, a task can correspond to a single thread or a process. Conceptually, a task represents both a unit of work and the worker itself; different programming paradigms emphasize different aspects of this duality. A common parallel programming paradigm conceptualizes some tasks as parallel workers that perform computation and others as managers that coordinate input and results. Our card sorting example used both managers and workers.

In some parallel programs, tasks correspond to threads running within a single process. Because these threads share the process's memory, they can communicate directly through that shared memory. The OpenMP API facilitates this kind of shared memory parallelism.

[Image: two office workers sharing a cubicle] In a shared memory design, multiple workers occupy the same cubicle (process) simultaneously.
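
To make this concrete, here is a minimal OpenMP sketch in C; the array size and the computation per element are arbitrary choices for illustration, not drawn from any particular program. Every thread created by the parallel directive works on the same array in the process's shared memory.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];

        /* Each thread is a task within the same process; all threads
           read and write the shared array a[] directly. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            a[i] = 2.0 * i;
        }

        printf("Threads available: %d, a[N-1] = %f\n",
               omp_get_max_threads(), a[N - 1]);
        return 0;
    }

Compiled with OpenMP support (for example, gcc -fopenmp), the loop iterations are divided among the threads of one process, with no explicit communication required.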

Tasks can also be separate processes, in which case the available memory is distributed across the tasks rather than shared. Because they do not rely on shared memory, these tasks can run on different computers, allowing the parallel workflow to use more memory and more CPU cores. Although it is more complicated to program, distributed memory parallelism is more scalable than shared memory parallelism. The Message Passing Interface (MPI) is the common interface standard for communication among tasks that do not share memory. An MPI task corresponds to a process and its associated main thread. Each MPI task is identified by a unique numerical ID called its rank. If an MPI task creates additional threads within the process (using OpenMP or some other mechanism), those threads can be thought of as subtasks.

[Image: two office cubicles, each occupied by one worker] In a distributed memory design, parallel workers are assigned to different cubicles (processes).
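
As a simple illustration of ranks, the short MPI program below (in C) has each task report its own rank; the message text is an arbitrary choice. Launching it with, say, mpiexec -n 4 would start four processes, each printing a different rank.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);

        /* Each MPI task is a separate process with its own memory;
           its rank is its unique numerical ID within the communicator. */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Task %d of %d reporting\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Because the tasks share no memory, any data they need to exchange must be sent explicitly with MPI communication calls.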

Generally, the goal of a parallel program is to create various tasks that execute independent computations while coordinating communication among the tasks. The program might do this using MPI, OpenMP, or some other means. It can even use a combination of these, e.g., MPI tasks may create subtasks using OpenMP. In our cubicles and workers analogy, using MPI and OpenMP together would create multiple cubicles with multiple workers in each cubicle.
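
A minimal hybrid sketch in C makes the combined picture concrete; requesting MPI_THREAD_FUNNELED support and the printed message are illustrative assumptions. Each MPI task is a cubicle, and the OpenMP threads it spawns are the workers inside it.

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char *argv[])
    {
        int rank, provided;

        /* Ask MPI for thread support so each task can safely create
           OpenMP subtasks (threads). */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One cubicle (MPI task) containing several workers (threads). */
        #pragma omp parallel
        {
            printf("Task %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Run with several MPI tasks and several OpenMP threads per task, the output shows multiple cubicles, each reporting multiple workers.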

 