The following is a list of MPI features, categorized by the Cornell Virtual Workshop roadmap that covers each concept. If you complete all five CVW MPI roadmaps, all of the listed features will be covered in detail. For now, this list should give you some idea of what MPI was designed to do.

Introduction to MPI (this roadmap)
Deterministic assignment of tasks to machines
Specify where each section of code will run, so that processes doing equal amounts of work execute efficiently (you can do the load balancing yourself)
Simple, unique rank for each process
Identify output from each of many instances with unique file names
Separate name and address spaces
Work with distributed data without having to change variable names
Guaranteed in-order delivery of messages
Safely make assumptions about the format and content of each received message, since messages between a pair of processes arrive in the order they were sent
High performance
Establish communication patterns at the outset -- connections (e.g., sockets) are set up during MPI_Init, so they are already in place when messages are sent (illustrated in the sketch below)
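
The short C program below is a minimal sketch of these basics: it obtains this process's unique rank and the communicator size, and uses the rank to build a distinct output file name. The file name pattern out.<rank> is purely illustrative, and the connections among processes are established inside MPI_Init.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        char fname[64];
        FILE *fp;

        MPI_Init(&argc, &argv);                  /* connections are established here */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* unique rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        /* Each rank writes to its own file: out.0, out.1, ... (illustrative pattern) */
        sprintf(fname, "out.%d", rank);
        fp = fopen(fname, "w");
        fprintf(fp, "Hello from rank %d of %d\n", rank, size);
        fclose(fp);

        MPI_Finalize();
        return 0;
    }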
MPI Point-to-Point Communication
Blocking communication
Halt a program's execution until a message has been sent or received (e.g., MPI_Ssend, MPI_Recv)
Non-blocking communication
Perform other, unrelated work after a message has been queued (e.g., MPI_Isend), as in the sketch below
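
A minimal sketch of these two modes, assuming at least two processes: rank 0 sends one message with the blocking MPI_Ssend and another with the non-blocking MPI_Isend (completed later with MPI_Wait), while rank 1 receives both with blocking MPI_Recv. The values sent are arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, a = 1, b = 2, x, y;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Blocking synchronous send: returns only after rank 1 has begun receiving */
            MPI_Ssend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

            /* Non-blocking send: returns immediately so other work can proceed */
            MPI_Isend(&b, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
            /* ... unrelated computation could be done here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* ensure the send buffer is safe to reuse */
        } else if (rank == 1) {
            MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&y, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d and %d\n", x, y);
        }

        MPI_Finalize();
        return 0;
    }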
MPI Collective Communication
Collective communication functions
Make use of Broadcast, Scatter/gather, Scan/Reduction, All-to-all, Barrier
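
As a sketch of two of these collective operations, the fragment below broadcasts a value from rank 0 to all processes with MPI_Bcast, then sums one contribution per rank back onto rank 0 with MPI_Reduce. The particular values are arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, n = 0, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) n = 100;                      /* value initially known only to the root */

        /* Broadcast: every rank receives n from rank 0 */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Reduction: sum each rank's contribution onto rank 0 */
        int my_part = n / size;
        MPI_Reduce(&my_part, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("Sum of all contributions: %d\n", sum);

        MPI_Finalize();
        return 0;
    }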
MPI One-Sided Communications
One-Sided Calls
Allow a single MPI process to specify both the origin and the target of a data transfer (put or get), so the other process need not make a matching call; this can reduce synchronization overhead.
Remote Direct Memory Access
Use RMA to directly access the memory of other MPI processes through low-latency communication paths like Intel Omni-Path or InfiniBand.
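
The sketch below illustrates a one-sided put, assuming at least two processes: each rank exposes an integer through an MPI window, and rank 0 writes its value directly into rank 1's window with MPI_Put, with MPI_Win_fence calls marking the access epoch. Rank 1 makes no matching send or receive call; the values are arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, target_buf = 0, my_value;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        my_value = rank + 100;

        /* Expose target_buf on every rank as a window that other ranks may access directly */
        MPI_Win_create(&target_buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                       /* open an access epoch */
        if (rank == 0) {
            /* Rank 0 writes straight into rank 1's window; rank 1 makes no call */
            MPI_Put(&my_value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);                       /* close the epoch; the put is complete */

        if (rank == 1) printf("Rank 1's window now holds %d\n", target_buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }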
MPI Advanced Topics
Communicators and virtual topologies
Define communicators on a subset of processes, define private communicators for use by libraries, or arrange processes in a Cartesian or graph topology that mirrors the application's communication pattern
Standard interface to utility functions
Assure portability through a standard interface to utilities such as the timer (MPI_Wtime) and processor name (MPI_Get_processor_name)
Error handling
Provide error-handling functions for communication failures
Persistent communications
Set up a communication path once, then use and reuse it; persistent operations are implicitly non-blocking (illustrated in the sketch below)
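
To illustrate persistent communications (and the portable timer mentioned above), the sketch below sets up a send/receive path once with MPI_Send_init and MPI_Recv_init, then reuses it repeatedly with MPI_Start and MPI_Wait. It assumes at least two processes; the message count and values are arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    #define NSTEPS 10

    int main(int argc, char *argv[])
    {
        int rank, step, buf = 0;
        double t0, t1;
        MPI_Request req = MPI_REQUEST_NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Set up the communication path once, before the loop */
        if (rank == 0)
            MPI_Send_init(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        else if (rank == 1)
            MPI_Recv_init(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

        t0 = MPI_Wtime();                            /* portable, standard timer */
        for (step = 0; step < NSTEPS; step++) {
            if (rank == 0) buf = step;               /* new data, same preset path */
            if (rank <= 1) {
                MPI_Start(&req);                     /* reuse the path; implicitly non-blocking */
                MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete this repetition */
            }
        }
        t1 = MPI_Wtime();

        if (rank == 1)
            printf("Last value received: %d (loop took %g seconds)\n", buf, t1 - t0);

        if (rank <= 1)
            MPI_Request_free(&req);                  /* release the persistent request */

        MPI_Finalize();
        return 0;
    }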
 