MPI_Accumulate allows the caller to combine the data moved to the target process with data already present there, such as accumulating a sum at the target process. The same result could be achieved by using MPI_Get to retrieve the data (followed by synchronization), performing the sum at the origin, and then using MPI_Put to send the updated data back to the target process. MPI_Accumulate simplifies this multi-step sequence and also permits greater concurrency. Multiple origin processes are allowed to perform MPI_Accumulate calls on the same target location, simplifying operations in which the order of the operands (as in a sum) does not matter.

int MPI_Accumulate(void *origin_addr,
                   int origin_count, MPI_Datatype origin_datatype,
                   int target_rank, MPI_Aint target_disp,
                   int target_count, MPI_Datatype target_datatype,
                   MPI_Op op, MPI_Win win)
origin_addr
initial address of the origin buffer
origin_count
the number of entries in the origin buffer
origin_datatype
the datatype of each entry
target_rank
the rank of the target
target_disp
displacement from target window start to target buffer (target offset)
target_count
number of entries in the target buffer
target_datatype
datatype of each entry in the target buffer
op
predefined reduce operation
win
window object

The allowed operations are those predefined for MPI_Reduce (max, min, sum, product, and the various and/or/xor operations). User-defined operations are not allowed. The target and origin datatypes can each be a predefined or a derived datatype, but all of their basic components must be of the same predefined type. As with other RMA communication calls, the target buffer must fit in the window. Lastly, an additional predefined operation, MPI_REPLACE, allows MPI_Accumulate to behave as MPI_Put does (albeit with different concurrency constraints).

 
©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement