Planning for Parallel

Steve Lantz, Andrew Dolgert (original)
Cornell Center for Advanced Computing

Revisions: 10/2023, 10/2014, 3/2012 (original)

Generally, it is best to consider questions of scalability in the earliest stages of design, when the top level of the code is being constructed and the main algorithms and data structures are being selected. At this stage, it is also good to be aware of the factors that tend to inhibit scalability, such as excessive reliance on global communication and synchronization.
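As a rough illustration (not taken from the original text), the C/MPI sketch below shows one pattern that tends to inhibit scalability: performing a global reduction, which acts as a synchronization point, at every iteration of a loop, so that all ranks wait for the slowest one at each step. Testing for convergence only occasionally is one common way to relax that coupling. The function local_update() and the constant CHECK_INTERVAL are hypothetical placeholders.

```c
/* Sketch: frequent vs. deferred global communication in an iterative code.
 * Hypothetical skeleton; local_update() and CHECK_INTERVAL are placeholders. */
#include <mpi.h>
#include <stdio.h>

#define CHECK_INTERVAL 100   /* how often to run the global convergence test */

/* Placeholder for per-rank work; returns a local residual. */
static double local_update(int rank, int step)
{
    return 1.0 / (double)((rank + 1) * (step + 1));
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double global_resid = 1.0;

    for (int step = 0; step < 1000 && global_resid > 1e-6; step++) {
        double local_resid = local_update(rank, step);

        /* Scalability-inhibiting pattern: calling MPI_Allreduce here, on
         * every iteration, would force all ranks to wait for the slowest
         * rank at every step.
         *
         * A more scalable alternative: test for convergence only every
         * CHECK_INTERVAL steps, so ranks run independently in between. */
        if (step % CHECK_INTERVAL == 0) {
            MPI_Allreduce(&local_resid, &global_resid, 1, MPI_DOUBLE,
                          MPI_MAX, MPI_COMM_WORLD);
            if (rank == 0)
                printf("step %d: global residual %g\n", step, global_resid);
        }
    }

    MPI_Finalize();
    return 0;
}
```

The trade-off in this sketch is that convergence may be detected up to CHECK_INTERVAL steps late, in exchange for far fewer global synchronization points.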

Objectives

After you complete this segment, you should be able to:

  • List each stage of the scientific application building process
  • Identify the decisions or actions influencing software performance
  • Describe the factors to consider when selecting algorithms and sub-algorithms
  • List the characteristics of an efficient and scalable algorithm
  • Explain how performance is affected by choices in implementing the algorithm
  • Explain the disadvantages of frequent synchronization and global communication among parallel processes
  • Define the term load balancing
  • Define the term "noise" and its potential consequences in the context of parallel computing

Prerequisites

There are no prerequisites for this topic. However, before you attempt to optimize your code for any particular HPC system, you should become familiar with the specific architecture and hardware characteristics of that system. You should also be aware of profiling and debugging tools that are available for it.

In the case of Frontera at TACC, good references are the Virtual Workshop roadmaps on Parallel Programming Concepts and Vectorization, but it is not necessary to complete those roadmaps before diving into this one.

© Cornell University | Center for Advanced Computing | Copyright Statement | Inclusivity Statement