Splitting Communicators
MPI_Comm_split is the simplest way to split a communicator into multiple, non-overlapping communicators.
int MPI_Comm_split(MPI_Comm comm, int color, int key,
                   MPI_Comm *newcomm)
Argument list for MPI_Comm_split:
- comm: communicator to split
- color: all processes with the same color go into the same new communicator
- key: value that determines the rank ordering within the new communicator; ties are broken by the rank in comm
- newcomm: resulting communicator
For example, we can imagine the processes laid out in a 2D grid, in which the numbering (rank) increases first left-to-right along a row, then continues on the next row. The following splits this grid into one communicator per row:
int rank, myrow;
MPI_Comm row_comm;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
myrow = rank / ncol;                       // row index: 0, 1, ..., nrows-1
MPI_Comm_split(MPI_COMM_WORLD, myrow, rank, &row_comm);
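By the same token we can create column communicators; a minimal sketch, assuming the same row-major layout (the names mycol and col_comm are illustrative):

int mycol = rank % ncol;                   // column index: 0, 1, ..., ncol-1
MPI_Comm col_comm;
MPI_Comm_split(MPI_COMM_WORLD, mycol, rank, &col_comm);

Passing rank as the key preserves the original relative ordering within each column.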
MPI-3 introduced a variant of this command that allows the implementation to perform some of the group division automatically, based on the process's "type" (the split_type argument):
int MPI_Comm_split_type(MPI_Comm comm, int split_type, int key,
                        MPI_Info info, MPI_Comm *newcomm)
This is demonstrated by the one type currently defined for split_type by the MPI-3 specification: MPI_COMM_TYPE_SHARED.
When this type is specified, the communicator is split into subcommunicators whose processes can create shared-memory regions, that is, processes on the same node. This is highly useful when you want a hierarchy of communication in which communicators are created based on node-level memory locality, even if a shared-memory region is not actually needed.
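For instance, a minimal sketch of such a node-local split (the names node_comm, node_rank, and node_size are illustrative assumptions):

MPI_Comm node_comm;
int node_rank, node_size;
/* key 0 everywhere: keep the original relative rank order on each node */
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                    0, MPI_INFO_NULL, &node_comm);
MPI_Comm_rank(node_comm, &node_rank);      // rank within this node
MPI_Comm_size(node_comm, &node_size);      // number of processes on this node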
As with MPI_Comm_split, the key argument determines the rank ordering within each resulting subcommunicator; the division itself is controlled entirely by split_type, which here yields one group per shared-memory node.
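As an illustrative sketch of this ordering effect with plain MPI_Comm_split (the name revcomm and the uniform color 0 are assumptions), a decreasing key makes the new ranks run opposite to the old order:

int rank, size;
MPI_Comm revcomm;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
// same color on every process, so no actual splitting; only the order changes
MPI_Comm_split(MPI_COMM_WORLD, 0, size - rank, &revcomm);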
Since implementations may define their own types or make use of the info argument, it is worthwhile to consult your MPI implementation's documentation if functionality is desired beyond what can be achieved with MPI_COMM_TYPE_SHARED. MPI_INFO_NULL can be passed for info when there is no actual information to supply.
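Putting the pieces together, here is a self-contained sketch of the hierarchy mentioned above: a node-local communicator from MPI_Comm_split_type, plus a cross-node communicator of the node-local rank-0 processes built with an ordinary MPI_Comm_split. All variable names are illustrative; MPI_UNDEFINED opts a process out of the split, in which case it receives MPI_COMM_NULL.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Level 1: one communicator per shared-memory node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                        0, MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Level 2: the node-local rank-0 processes ("leaders") form a
       cross-node communicator; all other processes opt out with
       MPI_UNDEFINED and receive MPI_COMM_NULL. */
    MPI_Comm leader_comm;
    int color = (node_rank == 0) ? 0 : MPI_UNDEFINED;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &leader_comm);

    if (leader_comm != MPI_COMM_NULL) {
        int nleaders;
        MPI_Comm_size(leader_comm, &nleaders);
        printf("world rank %d is a node leader; %d nodes total\n",
               world_rank, nleaders);
        MPI_Comm_free(&leader_comm);
    }
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

A typical pattern is then to communicate within a node over node_comm (possibly through shared memory) and between nodes over leader_comm.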