Obviously, MPI_MAX isn't the only operation that may be useful in a global computation. Take a look at either of the sample codes below. Once the question mark is removed, either the C or Fortran version of the program will compile correctly. But to make the computed number match the answer calculated by the formula, you will need to substitute a different operation in the call to MPI_Allreduce. Can you deduce the correct operation?

MPI_Allreduce Exercise
! Replace MPI_MAX? with the correct operation
program allreduce
  use mpi_f08
  implicit none
  type(MPI_Comm)   :: icomm
  integer          :: knt, mype, npes, ncalc, ierr
  double precision :: val, sum

  icomm = MPI_COMM_WORLD
  knt   = 1
  call mpi_init(ierr)
  call mpi_comm_rank(icomm,mype,ierr)
  call mpi_comm_size(icomm,npes,ierr)

  ! Each rank contributes its own rank number to the reduction
  val = dble(mype)
  call mpi_allreduce(val,sum,knt,MPI_REAL8,MPI_MAX?,icomm,ierr)

  ! Value predicted by the formula, for comparison with the result
  ncalc = ((npes-1)*npes)/2
  print '(" pe#",i5," sum =",f5.0," calc. sum =",i5)', &
          mype, sum, ncalc
  call mpi_finalize(ierr)
end program
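
The exercise refers to a C version as well (compiled below as allreduce.c), but only the Fortran listing appears on this page. The following is a minimal sketch of what the C version might look like; it keeps the same MPI_MAX? placeholder to be replaced, and details such as variable names and the output format are assumptions rather than the original source.

/* Replace MPI_MAX? with the correct operation */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int mype, npes, ncalc;
    double val, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &mype);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);

    /* Each rank contributes its own rank number to the reduction */
    val = (double)mype;
    MPI_Allreduce(&val, &sum, 1, MPI_DOUBLE, MPI_MAX?, MPI_COMM_WORLD);

    /* Value predicted by the formula, for comparison with the result */
    ncalc = ((npes - 1) * npes) / 2;
    printf(" pe#%5d sum =%5.0f calc. sum =%5d\n", mype, sum, ncalc);

    MPI_Finalize();
    return 0;
}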

If you don't recognize the formula, you can always experiment: change the MPI operation and test the program with different numbers of processes until the computed sum matches the calculated value every time. Or, you can peek at the correct operation by hovering here.

On Stampede2 or Frontera, copy and paste the sample code into a file using a command-line editor, then compile and run it in an interactive session. The Stampede2 and Frontera CVW Topics explain these steps in more detail.

  • Compile using one of the commands shown below:
    % mpif90 allreduce.f90 -o allreduce_f
    % mpicc allreduce.c -o allreduce_c
  • Start an interactive session on one node with 8 tasks using:
    % idev -N 1 -n 8
  • Run the code using the ibrun MPI launcher wrapper. Try varying the number of processes from 2 to 8:
    % ibrun -n 8 ./allreduce_c