Numerical libraries are usually much faster than any code you could write yourself. They accelerate tasks as simple as scaling arrays and as complex as finding minima of multidimensional functions. Look to libraries for cures for the hot spots in your code: library routines typically receive optimization attention far beyond what a compiler can achieve when starting from solution-oriented code of the kind found in Numerical Recipes, or from generic programs that might be available online.
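
As a small illustration, the sketch below scales an array in place by calling the Level 1 BLAS routine dscal through the standard CBLAS interface. It assumes an MKL environment in which mkl.h declares the CBLAS prototypes (with another BLAS you would include cblas.h instead), and it would be compiled with an MKL link flag such as -mkl or -qmkl, depending on the compiler version.

    #include <stdio.h>
    #include <mkl.h>      /* CBLAS prototypes from MKL; a generic BLAS would use <cblas.h> */

    int main(void)
    {
        double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};

        /* x <- 2.5 * x, handled by the tuned library routine
           instead of a hand-written loop */
        cblas_dscal(8, 2.5, x, 1);    /* arguments: n, alpha, x, stride */

        for (int i = 0; i < 8; i++)
            printf("%g ", x[i]);
        printf("\n");
        return 0;
    }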

Some of the popular and important software packages available on Frontera at TACC are listed below. We encourage you to check out whatever looks interesting using module spider <package-name> or module help <package-name> to get an idea of what is available. You can also browse or search TACC's online listing of preinstalled software on Frontera, as well as on other TACC systems.

Applications:   NAMD, GROMACS, GAMESS, VASP, NWChem, LAMMPS, OpenFOAM, ANSYS, Abaqus, ...
Parallel Libs:  PETSc, Trilinos, Hypre, UMFPACK, MUMPS, SuperLU, ARPACK, SLEPc, METIS, ParMETIS, ...
Math Libs:      MKL, SuiteSparse, FFTW(2/3), GSL, Python(2/3), Rstats, SUNDIALS, ...
Input/Output:   HDF5, PHDF5, NetCDF, Parallel-NetCDF, ...
Diagnostics:    TAU, PAPI, Remora, Intel Advisor, Intel VTune, DDT, Valgrind, ...

The Parallel Libs and Math Libs categories are especially noteworthy: among optimized numerical libraries, the most widely used are those supporting linear algebra, in particular implementations of BLAS and sparse linear algebra routines, together with discrete Fourier transforms (DFTs). Other kinds of libraries appear in the listing above as well, along with examples of full applications available on Frontera. Why? Because in many cases, when developers created their specialized software for doing I/O (say), or for enabling cluster-based computing, they also optimized it for single-core performance.
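
To give a flavor of what calling such a library looks like, here is a minimal, illustrative sketch of a one-dimensional complex DFT with FFTW 3: create a plan, execute it, clean up. With an fftw3 module loaded it would be linked with -lfftw3 (exact module and library names may vary).

    #include <fftw3.h>

    int main(void)
    {
        const int n = 256;

        /* fftw_malloc returns memory aligned for the library's SIMD code paths */
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

        for (int i = 0; i < n; i++) {      /* a simple impulse-train test signal */
            in[i][0] = (i % 16 == 0) ? 1.0 : 0.0;   /* real part */
            in[i][1] = 0.0;                         /* imaginary part */
        }

        /* planning lets FFTW choose the fastest algorithm for this size and machine */
        fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_execute(p);                   /* out[] now holds the DFT of in[] */

        fftw_destroy_plan(p);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }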

Although we focus on libraries for C/C++ and Fortran, other languages often have optimized libraries as well, typically with the heavily numerical parts calling compiled C/C++ or Fortran code. For example, TACC's Python and R modules are built to use MKL as their underlying BLAS/LAPACK library. NumPy, and routines that depend on it, should see a particular benefit; they can even make use of MKL's parallel capabilities, as described above. In general, if you see an optimized C or Fortran library that you wish existed for your favorite language, check around; your language may already have an interface to it! Quite a few languages can call C (and possibly Fortran) functions directly, for this very reason.
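
The sketch below shows the C side of that arrangement: a plain C function compiled into a shared library (for example, gcc -O2 -shared -fPIC -o libscale.so scale.c) can be loaded and called from Python via ctypes, from R via .C(), from Julia via ccall, and so on. The file, library, and function names here are hypothetical, chosen only for illustration.

    /* scale.c: a tiny numerical kernel intended to be built as a shared
       library, e.g.  gcc -O2 -shared -fPIC -o libscale.so scale.c
       A high-level language's foreign-function interface can then call
       scale_in_place() directly on one of its own arrays.
       (Hypothetical example, not an actual TACC-provided library.) */
    void scale_in_place(double *x, int n, double alpha)
    {
        for (int i = 0; i < n; i++)
            x[i] *= alpha;
    }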

Many of the libraries listed above are open source. They are not necessarily the best performers, but they sometimes include less common procedures that may be just right for your application. In addition, having access to the source code means that you can study, or even modify, existing methods if you need to. The GNU Scientific Library (GSL) is an example of an open-source library with many different functions available, and it can be helpful when you are trying to get an initial version of a code working (a minimal example appears after the list below). Currently, GSL has methods for all of the following:

Complex Numbers
Roots of Polynomials
Special Functions
Vectors and Matrices
Permutations, Combinations
Multisets
Sorting
BLAS Support
Linear Algebra
Eigensystems
Numerical integration
Random Number Generation
Quasi-Random Sequences
Random Distributions
Statistics
Histograms
N-Tuples
Monte Carlo Integration
Simulated Annealing
Minimization
Root-Finding
Least-Squares Fitting
Differential Equations
Interpolation
Numerical Differentiation
Fast Fourier Transform
Chebyshev Approximation
Series Acceleration
Discrete Hankel Transforms
Discrete Wavelet Transforms
Basis Splines
Physical Constants
IEEE Floating-Point
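
As a minimal example of the kind promised above, the sketch below evaluates a special function with GSL. With the gsl module loaded, it could be compiled with something like gcc bessel.c -lgsl -lgslcblas -lm; the exact include and library paths depend on the module.

    #include <stdio.h>
    #include <gsl/gsl_sf_bessel.h>

    int main(void)
    {
        double x = 5.0;
        /* regular cylindrical Bessel function J0(x), evaluated by GSL */
        double y = gsl_sf_bessel_J0(x);
        printf("J0(%g) = %.18e\n", x, y);
        return 0;
    }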

If you ever end up developing some numerical method of sufficiently widespread interest, GSL may give you an avenue for distributing it. It's a good way of putting other eyes on your code, which helps to improve it. Should the method prove to be either too specialized or too large to include in GSL, it could conceivably become an independent project: an example of this is GLPK (GNU Linear Programming Kit). None of this is to say that we all have to be developers. The point of bringing up GSL is that if you simply dig around a little in the open-source world, you might turn up exactly the odd piece of software or library routine you need. And of course you are not limited to the libraries for which TACC provides an environment module: custom libraries can always be built and installed in your home directory (but pay attention to compiler flags and any optimization instructions!).

Finally, when you are writing your own parallel numerical algorithms, you will most likely need an optimized implementation of MPI (Message Passing Interface), or some other communications package that is based upon it. These libraries are a major topic in their own right, and we do not cover them here. We mention only briefly a few useful MPI add-ons: hwloc and netloc (hardware locality and network locality), both of which are distributed with OpenMPI; and BLACS, an abstraction of MPI communication for matrices. OpenMPI is not installed on Frontera, but hwloc can be loaded as a separate module. BLACS is available as a module and is also included with MKL, because it is the communication layer for ScaLAPACK and Cluster FFT.
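
As a reminder of what the MPI layer itself looks like, here is the standard minimal example; it would be compiled with an MPI wrapper such as mpicc and started with the system's job launcher. The add-ons mentioned above sit alongside, or underneath, calls like these.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id within the communicator */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }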

 