In preparing application programs to run on GPUs, it helps to understand the main features of GPU hardware design, and to be aware of how those features resemble and differ from their counterparts in CPUs. This roadmap is intended for those who are relatively new to GPUs, or who would simply like to learn more about the computer technology that goes into them. No particular parallel programming experience is assumed, and the exercises are based on standard NVIDIA sample programs that are included with the CUDA Toolkit.

Objectives

After you complete this roadmap, you should be able to:

  • List the main architectural features of GPUs and explain how they differ from comparable features of CPUs
  • Discuss the implications for how programs are constructed for General-Purpose computing on GPUs (GPGPU), and identify the kinds of software that are likely to work well on these devices
  • Describe the names, sizes, and speeds of the computational and memory components of specific models of NVIDIA GPU devices

Prerequisites

  • Familiarity with High Performance Computing (HPC) concepts could be helpful, but most terms are explained in context.
  • Parallel Programming Concepts and High-Performance Computing is a possible companion topic for those who want to expand their knowledge of parallel computing in general, as well as on GPUs.

Requirements

  • There are no specific requirements for this roadmap; however, it may be helpful to have access to Frontera, or to any other computer that hosts an NVIDIA GPU and has the CUDA Toolkit installed.
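As a first hands-on step on such a machine, the CUDA Toolkit's `deviceQuery` sample prints exactly the kind of device information that the objectives above refer to. The short sketch below (not one of the official samples, just a minimal illustration) uses the CUDA runtime call `cudaGetDeviceProperties` to report each GPU's name and some of its computational and memory characteristics; note that the `clockRate` and `memoryClockRate` fields are deprecated in recent CUDA releases, so your toolkit version may warn about them.

```cuda
// Minimal device-query sketch: list each GPU's name, multiprocessor
// count, global memory size, and clock rates via the CUDA runtime API.
// Compile with: nvcc query.cu -o query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);

        printf("Device %d: %s\n", d, prop.name);
        printf("  Streaming Multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  Global memory:             %.1f GB\n",
               prop.totalGlobalMem / 1.0e9);
        // clockRate and memoryClockRate are reported in kHz
        printf("  GPU clock:                 %.0f MHz\n",
               prop.clockRate / 1000.0);
        printf("  Memory clock:              %.0f MHz\n",
               prop.memoryClockRate / 1000.0);
    }
    return 0;
}
```

Comparing this output across different GPU models is a quick way to see the architectural differences discussed later in the roadmap.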
©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement