Before RMA communication can take place, all processes must agree on the areas in their local memories that remote processes can operate on. Each such area, called a window, is created by a collective call that is executed by all processes in the given communicator.

int MPI_Win_create(void *base, MPI_Aint size, int disp_unit, 
		MPI_Info info, MPI_Comm comm, MPI_Win *win)

base
initial address of the window
size
size of the window in bytes; note that MPI_Aint is an MPI-defined integer type wide enough to hold any valid address
disp_unit
local unit size for displacements, in bytes (to allow for correct array indices in heterogeneous environments)
info
handle to an MPI_Info object
comm
handle to a communicator
win
handle to the window created by the call (out)
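As a concrete illustration, here is a minimal sketch of how the call might be used. It assumes each process exposes a local array of 1000 doubles (the buffer name and size are arbitrary choices for this example), and it sets disp_unit to sizeof(double) so that displacements can be expressed in array elements.

    #include <mpi.h>

    #define N 1000

    int main(int argc, char *argv[])
    {
        double buf[N];            /* local memory to be exposed to other processes */
        MPI_Win win;

        MPI_Init(&argc, &argv);

        /* Collective call: every process in MPI_COMM_WORLD contributes a window. */
        MPI_Win_create(buf, (MPI_Aint)(N * sizeof(double)), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* ... synchronization calls and RMA communication would go here ... */

        MPI_Win_free(&win);       /* also collective, like the creation call */
        MPI_Finalize();
        return 0;
    }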

Simply creating the window does not automatically make its data accessible to other processes; the window must also be "opened" by a synchronization call. Doing so starts an RMA epoch, the period between two synchronization calls during which RMA accesses to the window are permitted; epochs will be explained in more detail later.
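One common way to open and close such an epoch is active-target synchronization with MPI_Win_fence, which is collective over the window. The fragment below is a sketch that would slot into the program above, between window creation and MPI_Win_free; the target rank and displacement are arbitrary choices for illustration.

    int rank;
    double value = 42.0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Win_fence(0, win);      /* start an epoch on every process */

    if (rank == 0) {
        /* write one double into element 5 of the window on rank 1 */
        MPI_Put(&value, 1, MPI_DOUBLE, 1, 5, 1, MPI_DOUBLE, win);
    }

    MPI_Win_fence(0, win);      /* end the epoch; the put is now complete */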

A few special points about RMA windows are worth mentioning.

  1. Each process may specify completely different locations, sizes, displacement units, and info arguments. However, the user should be aware that an RMA communication call is erroneous if it tries to access data outside the bounds of the remote window. Before using this feature, consider if the benefits for your particular application are worth the extra complexity.
  2. The same region of memory may appear in multiple windows that have been defined for a given process. Again, the user should be aware that concurrent communications to distinct, overlapping windows may lead to undefined results.
  3. Performance may be improved by ensuring that the window boundaries align with natural boundaries such as word or cache-line boundaries.
  4. A window can be created in any part of the process memory; however, on some systems, the performance of windows in memory allocated by MPI_Alloc_mem will be better, as sketched in the example after this list.
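To illustrate the last point, the sketch below allocates the window's memory with MPI_Alloc_mem rather than using an ordinary array; the buffer size is again an arbitrary choice. (MPI-3 also offers MPI_Win_allocate, which combines allocation and window creation in a single call.)

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        const MPI_Aint nbytes = 1000 * sizeof(double);
        double *buf;
        MPI_Win win;

        MPI_Init(&argc, &argv);

        /* Ask MPI for memory that may be specially prepared for RMA
           (e.g., pinned or registered), then expose it as a window. */
        MPI_Alloc_mem(nbytes, MPI_INFO_NULL, &buf);
        MPI_Win_create(buf, nbytes, sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* ... RMA epochs and communication ... */

        MPI_Win_free(&win);
        MPI_Free_mem(buf);
        MPI_Finalize();
        return 0;
    }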
 