Steve Lantz
Cornell Center for Advanced Computing

Revisions: 5/2023, 12/2021, 5/2021 (original)

The hardware design for graphics processing units (GPUs) is optimized for highly parallel processing. As a result, application programs for GPUs rely on programming models like NVIDIA CUDA that can differ substantially from traditional serial programming models based on CPUs. Still, one may ask: is the world of GPUs really so different from the world of CPUs? If one looks more closely, one discovers that many aspects of a GPU's architecture resemble those of modern CPUs, and the differences are at least partly a matter of terminology.
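
To make the contrast concrete, below is a minimal CUDA sketch (not part of the original topic) in which the work that a serial CPU loop would do is instead spread across many GPU threads. The kernel name scale, the array size, and the choice of 256 threads per block are illustrative assumptions only.

    #include <cstdio>

    // Kernel: each GPU thread scales one element of the array.
    // This replaces the body of a serial CPU loop over i.
    __global__ void scale(float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index per thread
        if (i < n)
            x[i] *= a;
    }

    int main()
    {
        const int n = 1 << 20;
        float *x;

        // Unified (managed) memory is accessible from both the host (CPU) and the device (GPU)
        cudaMallocManaged(&x, n * sizeof(float));
        for (int i = 0; i < n; i++)
            x[i] = 1.0f;

        // Launch enough blocks of 256 threads to cover all n elements;
        // the triple-chevron syntax marks a CUDA kernel launch, as opposed
        // to an ordinary (serial) function call.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        scale<<<blocks, threadsPerBlock>>>(x, 2.0f, n);

        cudaDeviceSynchronize();  // wait for the GPU to finish

        printf("x[0] = %f\n", x[0]);  // expect 2.0
        cudaFree(x);
        return 0;
    }

Even in this tiny sketch, the programmer must think about how the work is divided among thousands of threads, which is the essential shift in perspective that this topic explores.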

Objectives

After you complete this topic, you should be able to:

  • List the main architectural features of GPUs and explain how they differ from comparable features of CPUs
  • Discuss the implications for how programs are constructed for General-Purpose computing on GPUs (GPGPU), and for what kinds of software ought to work well on these devices
  • Name and describe the computational components of NVIDIA GPU devices, and their associated software constructs

Prerequisites
  • Familiarity with High Performance Computing (HPC) concepts could be helpful, but most terms are explained in context.
  • Parallel Programming Concepts and High-Performance Computing can serve as a companion to this topic for those who want to expand their knowledge of parallel computing in general, as well as on GPUs.