MPI is a "standard by consensus," originally designed in an open forum that included hardware vendors, researchers, academics, software library developers, and users representing over 40 organizations. This broad participation in its development ensured MPI's rapid emergence as a widely used standard for writing message-passing programs. MPI is not a true standard; that is, it was not issued by a standards organization such as ANSI or ISO.

The MPI "standard" was introduced by the MPI Forum in May, 1994 and updated in June, 1995. The document that defines it is titled "MPI: A Message-Passing Standard," published by the University of Tennessee and available on the MPI Forum. If you are not already familiar with MPI, you will probably want to make reference to this document for the syntax of MPI routines, which this topic (nor the other topics in the MPI roadmap) will not cover except to illustrate specific cases.

MPI-2.0 was completed in 1997, and MPI-3.0 was finalized in 2012. MPI-3.1, finalized in 2015, provides small additions and fixes to MPI-3.0. Given the large base of parallel applications that rely on MPI, and the long history of MPI's stable interface, it is likely that MPI's core API will remain unchanged for years to come. MPI-3 is still the most widely used version of the standard, and the Intel MPI libraries installed on Stampede2 and Frontera implement the MPI-3 standard.

Given the ever-changing landscape of HPC architecture, we can continue to expect new extensions supporting new architectures and parallel programming paradigms, as was the case for MPI-2 and MPI-3. MPI-4.0 was released on June 9, 2021. Current efforts focus on MPI-4.1 and MPI-5.0.

MPI-2 Features

MPI-2 provides extensions to the MPI message-passing standard. This effort did not change the original MPI; it extended MPI in the following areas:

  • One-sided communication (get, put), illustrated in the sketch after this list
  • Dynamic process management
  • Parallel I/O
  • Extended collective communication
  • C++, F90 bindings
  • External interfaces
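
To make the one-sided model concrete, the following is a minimal sketch in C of a get-style exchange. It is not drawn from the standard document; the window named win, the fence-based synchronization, and the single-integer payload are illustrative choices only.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank exposes one integer in a window that other ranks can access. */
        int local = rank;
        MPI_Win win;
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Rank 0 reads ("gets") the value exposed by the last rank, with no
           matching send or receive posted by the last rank. */
        int remote = -1;
        MPI_Win_fence(0, win);            /* open an access epoch on all ranks   */
        if (rank == 0)
            MPI_Get(&remote, 1, MPI_INT, size - 1, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);            /* close the epoch; transfers complete */

        if (rank == 0)
            printf("Rank 0 read %d from rank %d\n", remote, size - 1);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Fence synchronization is the simplest way to bracket one-sided operations; MPI also offers more selective synchronization modes (post/start/complete/wait and lock/unlock).
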
MPI-3 Features

MPI-3 may break backward compatibility in some cases. However, most MPI code will still be compatible with MPI-3, since the new standard largely consists of extensions to MPI-2:

  • One-sided communication: improved support for shared memory models
  • Collective communication
    • Added nonblocking functions (illustrated in the sketch after this list)
    • Added neighborhood collectives for specifying process topology
  • MPI_Count datatype added for defining large contiguous derived datatypes
  • MPI_T tool information interface added, allowing inspection of MPI internal variables
  • Added Fortran 2008 bindings
  • Removed C++ bindings; use C bindings from C++ instead
  • Some advanced functions that were deprecated in MPI-2 are removed or replaced in MPI-3
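
As an example of the collective additions, here is a minimal sketch in C of an MPI-3 nonblocking collective using MPI_Iallreduce. The variable names and the trivial single-integer reduction are illustrative assumptions, not part of the standard's text.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Start a nonblocking all-reduce: the call returns immediately and
           the reduction progresses in the background. */
        int local = rank, sum = 0;
        MPI_Request req;
        MPI_Iallreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

        /* Independent computation could overlap with the collective here. */

        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* sum is valid only after the wait */

        if (rank == 0)
            printf("Sum of all ranks: %d\n", sum);

        MPI_Finalize();
        return 0;
    }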
 