MPI collective communication operations can be divided into three categories:

Synchronization
  • Barrier synchronization
Data Movement
  • Broadcast from one member to all other members
  • Gather data from all members to one member
  • Scatter data from one member to all members
  • All-to-all exchange of data
Global Computation
  • Global reduction (e.g., sum, min of distributed data elements)
  • Scan (prefix reduction) across all members of a communicator

There is a separate topic in this roadmap for each of these three types of collective communication.

© Cornell University | Center for Advanced Computing | Copyright Statement | Inclusivity Statement