The purpose of this exercise is to make sure that you can actually compile and run MPI code. To do this, take the "Hello World" code in your preferred language (C or Fortran) and paste it into your favorite editor on Stampede2 or Frontera. Compile it using the appropriate MPI compiler wrapper, e.g.:

login3$ mpicc -o mpi_silly -O2 hello_mpi.c

or

login3$ mpif90 hello_mpi.f90 -o mpi_silly
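
If you don't have the source file at hand, here is a minimal sketch of what hello_mpi.c might contain. The version presented earlier in the roadmap may differ in detail; any variant in which the manager (rank 0) sends a greeting and each worker echoes it along with its rank will produce output like the sample shown below.

/* hello_mpi.c - minimal sketch of an MPI "Hello, world";
   details may differ from the version shown earlier in the roadmap */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    char msg[20];
    MPI_Status status;

    MPI_Init(&argc, &argv);                   /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this task's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of tasks */

    if (rank == 0) {
        /* Manager: send the greeting to every worker */
        strcpy(msg, "Hello, world");
        for (i = 1; i < size; i++)
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, i, 0, MPI_COMM_WORLD);
    } else {
        /* Worker: receive the greeting and echo it with this task's rank */
        MPI_Recv(msg, 20, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
        printf("Message from process = %d : %s\n", rank, msg);
    }

    MPI_Finalize();                           /* shut down MPI */
    return 0;
}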

Move the binary into your home directory, and use a text editor to create a batch script that executes the app. The job will simply run 16 MPI tasks on a single development node. For details about using sbatch to run jobs on TACC systems, see the Stampede2 Environment or the Frontera Environment CVW roadmap. Your batch script might look like the sample below, but be sure to fill in your own allocation account number on the line that starts with #SBATCH -A. If you have only one account, you can remove that line from the script entirely.

#!/bin/bash
                          # Use bash shell
#SBATCH -J myMPI          # Job Name
#SBATCH -o myMPI.o%j      # Name of the output file (myMPI.oJobID)
#SBATCH -p development    # Queue name
#SBATCH -t 00:05:00       # Run time (hh:mm:ss) - 5 minutes
#SBATCH -N 1              # Requests 1 MPI node
#SBATCH -n 16             # 16 tasks total
#SBATCH -e myMPI.err%j    # Direct error to the error file
#SBATCH -A TG-TRA120006   # Account number

ibrun mpi_silly

Go ahead and submit the job, and verify that the contents of the stdout (myMPI.oJobID) look something like this:

TACC: Starting up job 562026
TACC: Starting parallel tasks...
Message from process = 6 : Hello, world
Message from process = 5 : Hello, world
Message from process = 2 : Hello, world
Message from process = 3 : Hello, world
Message from process = 10 : Hello, world
...

The output simply shows each worker reporting its ID and repeating the manager's "Hello, world" greeting. Though this is a simple MPI program with no real purpose, getting MPI to work with a scheduler is a good first step. At this point, however, it is still highly recommended that you gain a better understanding of, and practice with, MPI's fundamental two-way communication mechanisms, as well as some of the problems that can arise. This material is covered in the CVW MPI Point-to-Point roadmap.
