Introduction
Steve Lantz (original author), Brandon Barker
Cornell Center for Advanced Computing
Revisions: 2/2024, 1/2023, 11/2020, 1/2014, 6/2011 (original)
Hybrid programming is a coding strategy for creating programs that make the best use of multi-node HPC systems. HPC cluster architectures are inherently characterized by NUMA: Non-Uniform Memory Access. In a multi-node system, the global memory is distributed across all the nodes, but it is shared within each node (though typically divided among the sockets of a node).
- A multithreaded programming strategy is effective and efficient for shared memory, but a program that relies solely on shared memory can use the resources of only a single node.
- A message passing strategy permits processes on different nodes to communicate with one another but cannot take advantage of the efficiency of shared memory.
A hybrid strategy combines multithreading within nodes and message passing between nodes.
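To make this concrete, here is a minimal sketch (in C) of a hybrid program: MPI provides message passing between processes, typically one or a few processes per node, while OpenMP provides multithreading within each process. The compile and run commands in the comments are illustrative and vary by system; the MPI and OpenMP constructs used here are covered in detail later in the workshop.

```c
/* Minimal hybrid MPI + OpenMP sketch.
   Illustrative build command (varies by system):
       mpicc -fopenmp hybrid_hello.c -o hybrid_hello          */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, rank, nranks;

    /* Request thread support: MPI_THREAD_FUNNELED means only the
       main thread of each process will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank  */
    MPI_Comm_size(MPI_COMM_WORLD, &nranks); /* total MPI processes  */

    /* Multithreading within a process: the OpenMP threads all share
       the memory of their parent MPI process. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("Rank %d of %d, thread %d of %d\n",
               rank, nranks, tid, nthreads);
    }

    MPI_Finalize();
    return 0;
}
```

On most systems a program like this is started with an MPI launcher, for example `mpiexec -n 2 ./hybrid_hello` with `OMP_NUM_THREADS=4` set in the environment, which yields 2 processes of 4 threads each.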
Objectives
After you complete this workshop, you should be able to:
- Explain how HPC cluster hardware configuration relates to shared memory and distributed memory
- Compare programming strategies for shared and distributed memory
- Distinguish between using OpenMP and using MPI when writing parallel programs
- Discuss reasons to use or not use hybrid programming
Prerequisites
- A working knowledge of general programming concepts
- A basic familiarity with parallel programming concepts
© Cornell University | Center for Advanced Computing
CVW material development is supported by NSF OAC awards 1854828, 2321040, 2323116 (UT Austin) and 2005506 (Indiana University)