Parallel jobs to be run on Stampede2 or Frontera must be submitted through Slurm, the batch scheduling system. Running MPI programs directly on the login nodes is not permitted; this restriction protects the login nodes from unintended impacts on their performance. When submitting a job to Slurm with a job script, the name of the script file should be the first argument to sbatch. Comments at the head of the script provide Slurm with the parameters for the job, and the purpose of many of these parameters can be inferred simply by examining the following sample script:
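A representative script might look like the sketch below; the queue name, node and task counts, time limit, account name, and executable name are placeholders that you would replace with values appropriate for your own job and allocation.

    #!/bin/bash
    #SBATCH -J mpi_test           # job name
    #SBATCH -o mpi_test.o%j       # standard output file (%j expands to the job ID)
    #SBATCH -e mpi_test.e%j       # standard error file
    #SBATCH -p normal             # queue (partition) to submit to
    #SBATCH -N 2                  # number of nodes requested
    #SBATCH -n 96                 # total number of MPI tasks across all nodes
    #SBATCH -t 00:30:00           # wall-clock time limit (hh:mm:ss)
    #SBATCH -A myproject          # allocation (account) to charge -- placeholder

    # launch the MPI executable on the cores allocated to this job
    ibrun ./mpi_program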

The key command is the final one, ibrun, a TACC-specific front end to the mpirun command that actually launches the MPI processes on multiple cores. The overall workflow is illustrated in the figure below:

[Figure: Login node interaction with the Slurm manager. A terminal connects to the login node via ssh, and the login node interacts with the Slurm manager through sbatch and related commands. The Slurm manager handles scheduling, manages resource use, and distributes work to the compute nodes.]

For a more in-depth explanation of how to use Slurm to run your MPI jobs on TACC systems, refer to the appropriate user guide: the Stampede3 User Guide or the Frontera User Guide. The Cornell Virtual Workshop roadmaps for the Stampede2 Environment and the Frontera Environment offer complete introductions to using compilers, libraries, and batch jobs on these systems.

To submit the above batch file, assuming it is saved as mpi_batch.sh, run the following on Stampede2:
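    sbatch mpi_batch.sh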

The output and error files will be generated in the directory from which the job was submitted. Note that the MPI environment loaded in the batch job should match the MPI environment used to compile the code. For more information, see the Libraries on Stampede2 page in Code Optimization.
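For example, if the code was compiled with the Intel compiler and Intel MPI modules loaded, the batch script should load the same modules before calling ibrun. The module names below are illustrative; run module avail to see the versions available on your system.

    # on the login node, before compiling (module names are illustrative)
    module load intel impi
    mpicc -O3 my_mpi_code.c -o mpi_program

    # in the batch script, load the same MPI environment before launching
    module load intel impi
    ibrun ./mpi_program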
