Optimization via Compilers

Brandon Barker, Steve Lantz
Cornell Center for Advanced Computing

Revisions: 8/2023, 2/2019, 7/2015, 10/2014 (original)

If you have some understanding of what the compiler is trying to do, you can try to help it optimize your code by adopting a straightforward coding style, choosing good compiler options, and presenting it with easy-to-find optimization opportunities.
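As a small illustration of "straightforward coding style," consider the loop below. This is a sketch of our own (the function and variable names are not from the roadmap): a simple, countable loop with no pointer-aliasing tricks presents the compiler with an easy-to-find opportunity to hoist the invariant product and vectorize the loop.

```c
#include <stddef.h>

/* Straightforward style: a simple, countable loop over arrays that
   are declared restrict (no aliasing) is easy for the compiler to
   optimize and vectorize. Names here are illustrative only. */
void scale_add(size_t n, double a, double b,
               const double *restrict x, double *restrict y)
{
    double factor = a * b;            /* loop-invariant; -O2 would hoist
                                         this even if written inline    */
    for (size_t i = 0; i < n; i++)
        y[i] = factor * x[i] + y[i];  /* candidate for vectorization    */
}
```

A convoluted version of the same computation, with aliased pointers or a complicated exit condition, would deny the compiler these opportunities.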

Objectives

After you complete this segment, you should be able to:

  • Explain how to use compiler flags
  • List compiler options that are especially useful for optimization
  • List the benefits of using the -O2 optimization level
  • Distinguish between the -O2 and -O3 optimization levels
  • Demonstrate optimizing code for a specific processor model
  • List recommended coding practices that will assist the compiler in producing optimized machine instructions from your source code

Prerequisites

The Code Optimization roadmap assumes only that the reader has some basic familiarity with programming in any language. The HPC languages C and Fortran are used in the examples. Necessary concepts are introduced as you progress through the roadmap.

In parallel programming, the key consideration for optimizing large-scale parallel performance is the scalability of a code's algorithm(s). Therefore, readers who are developing parallel applications may want to peruse the roadmap for Scalability first.

More advanced references on performance optimization include the Virtual Workshop roadmaps on Profiling and Debugging and Vectorization. For those interested in programming for advanced HPC architectures, such as TACC's clusters built with Intel processors, the roadmap Case Study: Profiling and Optimization on Advanced Cluster Architectures is relevant. The present roadmap should make a good starting point for diving into any of those.

©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement