Steve Lantz, Peter Vaillancourt
Cornell Center for Advanced Computing

Revisions: 9/2021, 5/2021, 8/2020 (original)

Frontera is the largest academic supercomputer in the world. Located at The University of Texas at Austin's Texas Advanced Computing Center (TACC), Frontera is tailored to the very largest scientific computing projects. This portion of the quick-start guide describes how to use the Slurm Workload Manager to access Frontera's compute nodes and run your large-scale jobs.


After you complete this topic, you should be able to:

  • Explain the purpose of the job accounting system
  • Define the main parallel processing methods available on Frontera
  • Describe how to submit a job to a queue
  • Explain how to run a batch job on Frontera
  • Discuss the utility of running an interactive batch job
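As a preview of the batch workflow covered in this topic, a minimal Slurm batch script for Frontera might look like the sketch below. The job name, allocation name, and executable are placeholders; Frontera's actual queues, node counts, and time limits are described later in the guide.

```shell
#!/bin/bash
#SBATCH -J myjob            # job name (placeholder)
#SBATCH -o myjob.%j.out     # output file; %j expands to the job ID
#SBATCH -N 2                # number of nodes
#SBATCH -n 112              # total MPI tasks (56 cores per Frontera CLX node)
#SBATCH -p normal           # queue (partition) name
#SBATCH -t 01:00:00         # wall-clock time limit (hh:mm:ss)
#SBATCH -A myproject        # allocation/account name (placeholder)

# Launch an MPI application with TACC's ibrun wrapper
ibrun ./my_mpi_program
```

The script would be submitted with `sbatch myjob.sh`. For interactive work, TACC also provides the `idev` utility, which requests an interactive session on a compute node; its use is discussed later in this topic.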

Frontera is a leadership-class system, so its prospective users are likely to have a high degree of familiarity and experience with HPC and parallel computing. For that reason, the pace of this presentation is relatively brisk.

That said, there are no formal prerequisites for this Virtual Workshop topic. The following forms of preparation are recommended:

  • A working knowledge of Linux; otherwise try working through the Linux roadmap first.
  • A basic knowledge of Slurm, for the topics on running and managing jobs; otherwise try working through the Submitting Jobs topic of the Stampede2 Environment roadmap. This may be an especially helpful reference for those new to TACC as well.
©   Cornell University  |  Center for Advanced Computing  |  Copyright Statement  |  Inclusivity Statement