Vista includes CPU and GPU resources to meet a broad range of computational demands. Accordingly, users will submit jobs that differ significantly in resource type, size, duration, and priority. Vista's batch queues—which Slurm refers to as "partitions"—have been configured so that users gain access to their preferred resources fairly and efficiently.

Here is a summary of the characteristics of the different queues on Vista at the time of this writing (December 2025):

Properties of different Vista queues.

Queue Name   Max Nodes/Job (Cores/Job)        Max Job Time   Max Jobs per User   Max Jobs Submitted   Max Total Nodes per User   Charge per Node-Hour
gg           32 nodes (4,608 cores)           48 hrs         20 jobs             40 jobs              128 nodes                  0.33 SU
gh           64 nodes (4,608 cores/64 GPUs)   48 hrs         20 jobs             40 jobs              192 nodes                  1.0 SU
gh-dev       8 nodes (576 cores)              2 hrs          1 job               3 jobs               8 nodes                    1.0 SU
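
The charge in the last column is multiplied by the number of nodes and the number of hours a job uses. For example, a job that occupies 4 nodes in the gh queue for 2 hours is charged 4 nodes × 2 hours × 1.0 SU/node-hour = 8 SUs against the project's allocation.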

Jobs may be submitted to any of the above queues. Two of the queues include GPUs: each node in the gh and gh-dev queues contains one H200 GPU. Detailed specifications for all the types of nodes on Vista can be found in the Slurm Partitions section of the Vista User Guide. A minimal batch script targeting the gh queue is sketched below.
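
As an illustration, the following is a minimal sketch of a batch script that targets the gh queue. The job name (gh_test), allocation (myproject), and executable (my_app) are placeholder names, not values from this document; the node count and run time are chosen to stay within the gh limits in the table above.

#!/bin/bash
#SBATCH -J gh_test          # job name (placeholder)
#SBATCH -p gh               # queue (Slurm partition) to submit to
#SBATCH -N 2                # nodes requested; must not exceed 64 for gh
#SBATCH -n 2                # total MPI tasks (one per node here; adjust for your application)
#SBATCH -t 04:00:00         # run time; must not exceed 48 hrs for gh
#SBATCH -A myproject        # allocation to charge (placeholder)

ibrun ./my_app              # launch the executable with TACC's MPI launcher

Submitting the script with sbatch places the job in the gh queue:

$ sbatch gh_test.sh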

Caution: The queue configuration in the table above is subject to change at any time.

To see the current queue configuration, run the following command on a login node:

$ qlimits -v
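
The qlimits utility is specific to TACC systems. The standard Slurm sinfo command can also summarize each partition; the format string below is one possible way to show each partition's name (%P), time limit (%l), and node count (%D):

$ sinfo -o "%P %l %D"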
 