Steve Lantz (original author), Brandon Barker
Cornell Center for Advanced Computing

Revisions: 2/2024, 1/2023, 11/2020, 1/2014, 6/2011 (original)

This topic demonstrates different approaches to using MPI and OpenMP in hybrid programs, focusing on the different ways of combining multithreading with MPI messaging. An accompanying exercise explores how various configurations affect a program's behavior on an HPC system.
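As a preview of the hybrid style this topic covers, here is a minimal C sketch of an MPI+OpenMP program. It assumes an MPI implementation that supports MPI_THREAD_MULTIPLE (so that any thread may make MPI calls); the program requests that level with MPI_Init_thread and checks what the library actually provides before opening an OpenMP parallel region.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, rank, nranks;

    /* Request full thread support so any thread may call MPI;
       the library reports the level it actually provides. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got level %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each OpenMP thread announces itself; with MPI_THREAD_MULTIPLE,
       each thread could also send or receive messages at this point. */
    #pragma omp parallel
    {
        printf("Rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

On most systems this can be built with something like `mpicc -fopenmp hello_hybrid.c -o hello_hybrid` and launched with `mpirun -np 2 ./hello_hybrid`, with the thread count per rank controlled by the OMP_NUM_THREADS environment variable.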

Objectives

After you complete this topic, you should be able to:

  • Distinguish between single-threaded and multithreaded messaging
  • Identify different approaches for creating hybrid programs
  • Explain the consequences of using one, several, or all cores for communication
  • Demonstrate sending and receiving messages using threads

Prerequisites

  • A working knowledge of general programming concepts
  • A working knowledge of Linux; otherwise, try working through the Linux topic first
  • Ability to program in a high-level language such as Fortran or C
  • A basic familiarity with parallel programming concepts
  • A basic working knowledge of MPI at the level of the Cornell Virtual Workshop roadmap on the Message Passing Interface (MPI)
  • A basic working knowledge of OpenMP at the level of the Cornell Virtual Workshop roadmap on OpenMP
©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement