Frontera includes a variety of resources to meet a broad range of computational demands, and the jobs that users submit differ significantly in size, duration, and priority. Frontera's batch queues (which Slurm refers to as "partitions") are configured so that users gain access to their preferred resources fairly and efficiently.

Here is a summary of the characteristics of the different queues on Frontera at the time of this writing (April 2024):

Properties of different Frontera queues:

Queue Name    Max Nodes/Job (Cores/Job)       Preemptable?     Max Jobs  Max Job Time  Max Total Nodes  Charge per Node-Hour
development   40 nodes (2,240 cores)          no               1         2 hrs         40 nodes         1.0 SU
small         2 nodes (112 cores)             no               20        48 hrs        24 nodes         1.0 SU
normal        3-512 nodes (28,672 cores)      no               100       48 hrs        1836 nodes       1.0 SU
large         513-2048 nodes (114,688 cores)  no               1         48 hrs        4096 nodes       1.0 SU
flex          128 nodes (7,168 cores)         yes, after 1 hr  15        48 hrs        6400 nodes       0.8 SU
nvdimm        4 nodes (448 cores)             no               2         48 hrs        8 nodes          2.0 SU
rtx           22 nodes (352 cores)            no               15        48 hrs        22 nodes         3.0 SU
rtx-dev       2 nodes (32 cores)              no               1         2 hrs         2 nodes          3.0 SU
Info: Maximum jobs per user account

Users are allowed a maximum of 50 running and 200 pending jobs in all queues at one time.
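
To see how close you are to these limits, the standard Slurm squeue command will list your jobs, optionally filtered by state. For example:

$ squeue -u $USER              # list all of your running and pending jobs
$ squeue -u $USER -t PENDING   # list only the pending ones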

Jobs may be submitted to any of the above queues except the "large" queue, which requires special permission from TACC; they will want to confirm that your application is ready to run at that scale. A sketch of a batch script that names its queue explicitly appears below.
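
Here is a minimal sketch of such a batch script. The job name, allocation name, and executable are placeholders, and the node, task, and time requests must stay within the limits in the table above:

#!/bin/bash
#SBATCH -J myjob           # job name (placeholder)
#SBATCH -p normal          # queue (partition) to submit to
#SBATCH -N 4               # number of nodes requested
#SBATCH -n 224             # total MPI tasks (56 cores per Cascade Lake node)
#SBATCH -t 02:00:00        # run time (hh:mm:ss), within the queue's max
#SBATCH -A myallocation    # allocation to charge (placeholder)

ibrun ./my_app             # launch with TACC's MPI launcher

Saved as (say) myjob.sh, the script is submitted with:

$ sbatch myjob.sh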

Several of the queues feature specialized hardware. The "nvdimm" queue gives you access to a set of extra-large shared-memory nodes, each with 4 Cascade Lake processors and 2.1 TB of Intel Optane memory. The nodes in the "rtx" and "rtx-dev" queues each feature 4 NVIDIA Quadro RTX 5000 GPUs. Detailed specifications for all the types of nodes on Frontera can be found on TACC's system architecture page.
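
To target this hardware, name the corresponding queue when you submit. As a quick illustration (the script name here is a placeholder), the queue can also be selected on the sbatch command line, overriding any directive in the script:

$ sbatch -p rtx -N 1 -t 04:00:00 mygpujob.sh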

Caution: Queue configuration is subject to change.

Be aware that the queue configuration in the table above is subject to change at any time.

To get up-to-date queue configuration, run:

$ qlimits -v
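
The standard Slurm sinfo command can report similar partition-level information; the format string below selects the partition name, time limit, and node count:

$ sinfo -o "%P %l %D"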
 