One-sided communication methods were added to MPI as part of the MPI-2 standard and were greatly expanded in MPI-3, which introduced support for shared memory windows, windows with dynamically attached memory, request-based communication calls, and additional window locking mechanisms. In MPI, one-sided communication is also known as Remote Memory Access (RMA). On Frontera and Vista, the RMA implementations in the Intel MPI and MVAPICH2 libraries take advantage of the Remote Direct Memory Access (RDMA) capabilities of low-latency interconnect fabrics such as InfiniBand and Omni-Path. In this roadmap, we will introduce the various components of MPI RMA and show how to use them.
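To make the idea concrete before diving in, here is a minimal sketch of the one-sided pattern (illustrative only, not one of the roadmap's own exercises): every rank exposes a small buffer through an MPI window, and rank 0 writes directly into rank 1's buffer with MPI_Put, bracketed by MPI_Win_fence synchronization. Notice that the target rank issues no matching receive call.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank exposes one int to the other ranks through an RMA window. */
    int local = rank;
    MPI_Win win;
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Active-target synchronization: open an epoch on all ranks. */
    MPI_Win_fence(0, win);

    /* Rank 0 writes its rank number (0) into rank 1's window;
       rank 1 takes no explicit part in the transfer. */
    if (rank == 0 && nranks > 1)
        MPI_Put(&rank, 1, MPI_INT, /* target_rank = */ 1,
                /* target_disp = */ 0, 1, MPI_INT, win);

    /* Close the epoch; the Put is complete everywhere after this. */
    MPI_Win_fence(0, win);

    printf("Rank %d: local value is now %d\n", rank, local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and run with at least two ranks (e.g., via ibrun on TACC systems). Window creation, the RMA communication calls, and the synchronization models are each covered in detail later in the roadmap.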

All exercises and examples are verified to work on Frontera and Vista.

This is the fifth of five related roadmaps in the Cornell Virtual Workshop that cover MPI. To see the other roadmaps available, please visit the complete roadmaps list.

Objectives

After you complete this roadmap, you should be able to:

  • Identify basic concepts of one-sided communication in MPI programming
  • Define the term one-sided communication
  • Explain how RMA can improve data transfers
  • Identify the three RMA communication calls supported by MPI
  • Define the target_rank and target_datatype arguments
  • Demonstrate synchronizing MPI processes
  • List important considerations when using RMA calls
  • Demonstrate use of dynamically allocated memory
  • Explain the need for creating windows with shared memory
Prerequisites
Requirements

The examples and exercises in this roadmap are designed and tested to run on Frontera or Vista. However, they should also run on other HPC systems with little or no modification. To use Frontera or Vista, you need:

  • A TACC account to log in to Frontera or Vista
  • A compute time allocation for Frontera or Vista