When you are ready to run jobs on Frontera, you have two basic ways to gain access to the compute nodes.

  1. You can submit a batch job using Slurm's sbatch command. When resources become available, your job runs unattended, executing the commands that you provided ahead of time in a batch script (a minimal example appears after this list).
  2. You can start an interactive session through a special type of batch job, using idev or srun (see the sketch after this list). Assuming the requested resources are available, you are quickly granted a command prompt on a compute node. (Otherwise, you wait.)
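
As a rough sketch, a batch script for a small test run might look like the following. The job name, executable, and allocation name are placeholders you would replace with your own values; the queue and task count assume a single 56-core Frontera node:

#!/bin/bash
#SBATCH -J myjob             # job name (placeholder)
#SBATCH -o myjob.o%j         # output file; %j expands to the job ID
#SBATCH -p development       # queue (partition) to submit to
#SBATCH -N 1                 # number of nodes requested
#SBATCH -n 56                # total number of MPI tasks (56 cores per Frontera node)
#SBATCH -t 00:30:00          # maximum run time (hh:mm:ss)
#SBATCH -A myproject         # allocation to charge (placeholder)

ibrun ./mycode.exe           # launch the (placeholder) executable with TACC's MPI launcher

You would then submit the script with:

$ sbatch myjob.sh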
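
Likewise, an interactive session might be requested as shown below; the queue, node count, and time limit are again just illustrative:

$ idev -p development -N 1 -t 00:30:00

or, using srun directly:

$ srun --pty -p development -N 1 -n 56 -t 00:30:00 /bin/bash -l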

A certain amount of interaction is possible with batch jobs as well. While your batch job is running, you can ssh to any of the allocated nodes and execute commands there. This can be a good way to monitor job progress, for example. (Note: the Slurm environment will not exist in this "side door" type of interactive shell, so don't execute commands, such as ibrun, that require Slurm batch support.)
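
For instance, you might identify the nodes assigned to your job with squeue, then log in to one of them to watch your processes. The node name below is hypothetical; take yours from the NODELIST column:

$ squeue -u $USER    # note your job ID and the NODELIST column
$ ssh c123-001       # hypothetical node name taken from NODELIST
$ top                # watch CPU and memory usage; exit when done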

sinfo

Naturally, if you're intending to run a job, it's good to know whether the nodes you want are available. Slurm's sinfo command can be used to determine queue status:

$ sinfo -S+P -o "%18P %8a %20F"
PARTITION          AVAIL    NODES(A/I/O/T)      
development*       up       237/112/11/360      
flex               up       7663/556/37/8256    
large              up       7450/525/29/8004    
normal             up       7450/525/29/8004    
nvdimm             up       8/3/5/16            
rtx                up       73/10/1/84          
rtx-dev            up       1/5/0/6             
small              up       175/0/5/180

In the above output, the AVAIL column tells you whether the queue is up or down, while the NODES(A/I/O/T) column lists the number of nodes in the queue that are Allocated, Idle, or Other, together with the Total. Note that formatting options have been added to sinfo to make the output more compact and legible: -S+P sorts the listing by partition name, while %18P, %8a, and %20F print the partition name, its availability, and the four node counts, respectively. To save typing, you may want to create an alias (nodes, e.g.) for the command as shown.

$ alias nodes='sinfo -S+P -o "%18P %8a %20F"'
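
To make the alias permanent, you could add that same line to your ~/.bashrc so it is defined in every new shell. From then on, checking the queues is simply:

$ nodes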
 