This exercise explores running R as a Slurm batch job on Stampede2 or Frontera. You will need to create two scripts: an R script containing all of the commands you would have typed at the command line, and a job script with Slurm directives plus the commands for invoking R in batch mode.

Following is a sample R script. Name it sample_script_to_batch.R. Note that two ways of generating output are illustrated in this example.
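A minimal sketch of such a script follows. The particular computation (a vector x and a call to summary()) is illustrative; the point is the two output styles, auto-printing a top-level expression versus an explicit print() call:

```r
# sample_script_to_batch.R
# A trivial computation whose results are captured in R's batch output file.

x <- seq(1, 10)

# Method 1: auto-printing -- naming an object at top level prints its value
x

# Method 2: an explicit print() call
print(summary(x))
```

In batch mode, the output of both methods is captured in the output file rather than appearing on a terminal.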

The following is a bare-bones Slurm script for running the R script in batch mode (as a single task on one node). Name the batch script RTestScript.sh. Be sure to fill in the allocation_code. You can delete the line referring to the allocation code if you have only one allocation.
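A sketch of such a batch script is below. The job name, time limit, and module name are illustrative assumptions (TACC systems provide R through the Rstats module; on other systems the module name may differ):

```shell
#!/bin/bash
#SBATCH -J Rtest                # job name (illustrative)
#SBATCH -o slurm-%j.out         # output from the batch script itself; %j expands to the job ID
#SBATCH -p development          # queue to submit to
#SBATCH -N 1                    # one node
#SBATCH -n 1                    # one task
#SBATCH -t 00:05:00             # maximum run time (illustrative)
#SBATCH -A allocation_code      # your allocation; delete this line if you have only one

module load Rstats              # make R available (module name on TACC systems)

# Run the R script in batch mode, sending its output to output.txt
R CMD BATCH ./sample_script_to_batch.R ./output.txt
```

Note that stdout from the batch script itself goes to slurm-jobID.out, while everything R prints goes to the outfile named on the R CMD BATCH line.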

Submit the batch job. You should get a response similar to the following (depending on which system you are using):

login2.stampede2(31)$ sbatch RTestScript.sh

-----------------------------------------------------------------
          Welcome to the Stampede 2 Supercomputer
-----------------------------------------------------------------

No reservation for this job
--> Verifying valid submit host (login2)...OK
--> Verifying valid jobname...OK
--> Enforcing max jobs per user...OK
--> Verifying availability of your home dir (/home1/03143/)...OK
--> Verifying availability of your work dir (/work/03143//stampede2)...OK
--> Verifying availability of your scratch dir (/scratch/03143/)...OK
--> Verifying valid ssh keys...OK
--> Verifying access to desired queue (development)...OK
--> Verifying job request is within current queue limits...OK
--> Checking available allocation (TG-STA160002)...OK
Submitted batch job 282306

Once the job has finished, you should have two new files: slurm-jobID.out (slurm-282306.out in this example) and output.txt, or whatever name you gave your outfile. The file slurm-jobID.out should be empty unless there were errors in your batch script. If you ran R via R CMD BATCH and didn't give the outfile a name, it defaults to the script name with a .Rout extension.
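While you wait for the job to finish, you can monitor it with squeue; the -u flag limits the listing to your own jobs, and the job disappears from the list once it completes:

```shell
# list only your own jobs in the queue
squeue -u $USER
```
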

Examine output.txt:

Note that both methods of showing output resulted in the results being written to the output file.

 
© Cornell University | Center for Advanced Computing