High performance computing (HPC) focuses on any computing that challenges the resources at hand. Though HPC is commonly associated with scientific research at supercomputing facilities, it also encompasses general applications requiring a significant amount of data processing or fast response times. Its principles apply equally well to sluggish payroll programs and unruly MATLAB scripts.

Some HPC applications have already moved beyond the realm of scientific research. For example:

  • The U.S. Department of Energy (DOE) operates successful programs that pair businesses with national laboratories to solve pressing material engineering and manufacturing challenges.
  • Finance companies use in-house supercomputers and cloud-based HPC service providers for risk management, portfolio optimization, and pricing.
  • Massively multiplayer online games, such as World of Warcraft, use federated central servers to coordinate the online virtual lives of every person playing the game.
  • OpenAI, the deep learning company that developed the GPT-3 language model, trains its models on a supercomputer with 285,000 CPU cores and 10,000 GPUs.

Another kind of large-scale computing is High Throughput Computing (HTC). Here the goal is to assemble a very large pool of resources, often only loosely connected, and apply it to a computation in which each unit of work is done independently, so that no task blocks any other. This approach works especially well when the computation involves intense processing on relatively small chunks of data that can be distributed efficiently to computation nodes.
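
As a concrete illustration, the sketch below uses Python's multiprocessing module to farm independent work units out to a pool of workers. The work_unit function and its inputs are hypothetical stand-ins for the small, self-contained chunks of data described above, not part of any particular HTC system.

    # A minimal sketch of the HTC pattern: every work unit is processed
    # independently, so no result blocks any other. work_unit() is a
    # hypothetical stand-in for an intense calculation on a small chunk of data.
    from multiprocessing import Pool

    def work_unit(seed: int) -> float:
        """Do self-contained processing on one small piece of input."""
        total = 0.0
        for i in range(1, 100_000):
            total += (seed % i + 1) ** 0.5
        return total

    if __name__ == "__main__":
        # Each input is handed to whichever worker is free; the tasks
        # require no ordering and no communication with one another.
        with Pool(processes=4) as pool:
            results = pool.map(work_unit, range(100))
        print(f"Processed {len(results)} independent work units")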

The most famous example of HTC is SETI@home, which encouraged personal computer users to donate spare computing time to researchers. A contemporary project, Folding@home, achieves 50,000+ teraflops using donated compute power from tens of thousands of personal computers. In fact, Folding@home became the world's first exascale computer, achieving over 10¹⁸ floating-point operations per second during April of 2020 (Zimmerman et al. 2020).

Parallel programming is necessary to use modern resources effectively. Classic supercomputing applications typically run for such a long time that two days spent optimizing the code are regained when a calculation completes in 14 days instead of 16. And with the advent of multicore chips in workstations, the relationships among internal resources have become sufficiently complex that good programs for desktops often use the same design principles as programs for supercomputers. While this topic focuses on supercomputing applications, parallel programming principles are widely applicable.
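
To make the multicore point concrete, here is a minimal sketch of the same design principle on a workstation: decompose a calculation into independent pieces and map them across all available cores. The cpu_bound function is an assumed placeholder kernel, not an actual supercomputing code, and the measured speedup will vary by machine.

    # A minimal sketch, assuming a generic CPU-bound kernel: split the work
    # into independent pieces and map them across cores, then compare the
    # wall-clock time against a plain serial loop.
    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def cpu_bound(n: int) -> int:
        """Placeholder compute kernel: sum of squares up to n."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [2_000_000] * 8

        start = time.perf_counter()
        serial = [cpu_bound(n) for n in inputs]
        t_serial = time.perf_counter() - start

        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as ex:
            parallel = list(ex.map(cpu_bound, inputs))
        t_parallel = time.perf_counter() - start

        assert serial == parallel
        print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")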

 