The last type of synchronization uses a lock-request paradigm that should feel familiar to anyone who has used mutexes or other concurrency primitives. The idea is to permit access to a target by only one process at a time, so that other processes cannot interfere while communication is in progress. To achieve this, the caller (origin process) obtains a lock (which may be shared or exclusive) on the window at a specific target, which allows the communication calls to proceed.

While this seems analogous to acquiring a file lock or a mutex, it is important to note that the lock call may return before the lock is actually acquired. The programmer is only assured that the lock will be acquired before the subsequent RMA communication calls complete.

Once the caller has finished making communication calls, unlock is called to release the lock. As with the other synchronization methods, unlock returns only after all of the communication calls in the epoch have completed.

int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)
/* access and exposure epoch (communication calls go here) */
int MPI_Win_unlock(int rank, MPI_Win win)

lock_type
whether other processes may access the target window at the same time (MPI_LOCK_SHARED vs. MPI_LOCK_EXCLUSIVE)
rank
rank of the target process whose window is to be locked
assert
assertions used for optimization (pass 0 to assert nothing)
win
the window object involved in the calls

Note that a window cannot be locked while it is exposed via an (active-target) exposure epoch. Once the target process grants a lock on a given window, that window is considered to have entered an exposure epoch. Programmers who mix synchronization methods within a program may therefore need to include explicit synchronization code to prevent lock requests on already-exposed windows.

A simple example is provided below, demonstrating that the target process need do nothing other than participate in creating the window. This is why lock-unlock is referred to as passive target synchronization.


    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        int buf[2] = {0, 0};
        MPI_Win win;

        /* Start up MPI... */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            buf[0] = 42;  /* value to put into rank 1's window */
            /* Rank 0 will be the caller, so null window */
            MPI_Win_create(NULL,0,1,
                MPI_INFO_NULL,MPI_COMM_WORLD,&win);
            /* Request lock of process 1 */
            MPI_Win_lock(MPI_LOCK_SHARED,1,0,win);
            MPI_Put(buf,1,MPI_INT,1,0,1,MPI_INT,win);
            /* Block until the put has completed at the target */
            MPI_Win_unlock(1,win);
            /* Free the window */
            MPI_Win_free(&win);
        }
        else {
            /* All other ranks are targets; rank 1 receives the put */
            MPI_Win_create(buf,2*sizeof(int),sizeof(int),
                MPI_INFO_NULL, MPI_COMM_WORLD, &win);
            /* No sync calls on the target process! */
            MPI_Win_free(&win);
        }

        MPI_Finalize();
        return 0;
    }
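
Run the example with at least two processes (e.g., mpiexec -n 2 ./a.out). Note that MPI_Win_create and MPI_Win_free are collective calls that must be made by every rank in the communicator, which is why the target branch also creates and frees the window even though it makes no synchronization calls.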

Note that MPI-3 introduced MPI_Win_lock_all and MPI_Win_unlock_all, which acquire and release a shared lock on the windows of all processes associated with the window object in a single call.
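
As a brief sketch of how these variants might be used (assuming a window created collectively over MPI_COMM_WORLD, as in the example above, with every nonzero rank exposing at least one int at displacement 0), the origin can open a single shared-lock access epoch covering all targets and complete every operation with one unlock:

    /* Sketch: one shared-lock epoch covering every target; assumes
       win already exists and each target exposes at least one int */
    int nprocs, value = 42;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    MPI_Win_lock_all(0, win);                /* 0 = no assertions */
    for (int target = 1; target < nprocs; target++)
        MPI_Put(&value, 1, MPI_INT, target, 0, 1, MPI_INT, win);
    MPI_Win_unlock_all(win);                 /* completes all the puts */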

Request-based one-sided communication operations

MPI-3 added a further level of granularity beyond the post-start-complete-wait methodology introduced previously: request-based RMA communication operations that return a request handle, allowing the various test and wait functions described in the MPI Point-to-Point topic to be used with these operations. Combined with MPI_Win_lock_all, this makes it possible to interleave computation with communication calls that access non-overlapping segments of a window's buffer (hint: MPI_Waitany). The signatures of the request-based communication functions are shown below:


int MPI_Rput(const void *origin_addr, int origin_count, \
     MPI_Datatype origin_datatype, int target_rank, \
     MPI_Aint target_disp, int target_count, \
     MPI_Datatype target_datatype, MPI_Win win, \
     MPI_Request *request)

int MPI_Rget(void *origin_addr, int origin_count, \
     MPI_Datatype origin_datatype, int target_rank, \
     MPI_Aint target_disp, int target_count, \
     MPI_Datatype target_datatype, MPI_Win win, \
     MPI_Request *request)

int MPI_Raccumulate( \
     const void *origin_addr, int origin_count, \
     MPI_Datatype origin_datatype, int target_rank, \
     MPI_Aint target_disp, int target_count, \
     MPI_Datatype target_datatype, MPI_Op op, MPI_Win win, \
     MPI_Request *request)	 

int MPI_Rget_accumulate( \
     const void *origin_addr, int origin_count, \
     MPI_Datatype origin_datatype, void *result_addr, \
     int result_count, MPI_Datatype result_datatype, \
     int target_rank, MPI_Aint target_disp, int target_count, \
     MPI_Datatype target_datatype, MPI_Op op, MPI_Win win, \
     MPI_Request *request) 

 