openmpi is usually not in PATH or LD_LIBRARY_PATH. To initialize the environment, use the module command:

[elx]$ module avail                     # show available modules; if the module command is undefined (bash), source /etc/profile.d/modules.sh
[el6]$ module load openmpi-x86_64       # initialize environment on SL6-nodes
[el7]$ module load mpi/openmpi-x86_64   # initialize environment on EL7-nodes
[elx]$ which mpicc
/usr/lib64/openmpi/bin/mpicc

openmpi works on both HPC and BIRD. It primarily uses InfiniBand connections, but automatically falls back to Ethernet where InfiniBand is not available (as on BIRD).
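To test the Ethernet fallback explicitly (or to run on nodes without InfiniBand without warnings), the transport can be forced via the btl MCA parameter. A minimal sketch; the executable name ./a.out and the interface name eth0 are placeholders:

```shell
# Force TCP (Ethernet) transport; 'self' is needed for loopback,
# 'vader' enables shared memory between ranks on the same node.
mpirun --mca btl self,vader,tcp -np 4 ./a.out

# Restrict TCP to a specific interface if several are present:
mpirun --mca btl self,vader,tcp --mca btl_tcp_if_include eth0 -np 4 ./a.out
```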

openmpi on Maxwell

There are several openmpi installations available on Maxwell on top of the RedHat-supplied stack:

[@max-wgse001 ~]$ xwhich mpicc | grep openmpi
   ... module load maxwell openmpi/3.1.2; which mpicc: /software/openmpi/3.1.2/bin/mpicc
   ... module load maxwell openmpi/4.0.0; which mpicc: /software/openmpi/4.0.0/bin/mpicc
   ... module load maxwell openmpi/3.1.0; which mpicc: /software/openmpi/3.1.0/bin/mpicc
   ... module load maxwell openmpi/2.0; which mpicc: /software/openmpi/2.0/bin/mpicc
   ... module load maxwell openmpi/3.1.3; which mpicc: /software/openmpi/3.1.3/bin/mpicc
   ... module load maxwell openmpi/4.0.3; which mpicc: /software/openmpi/4.0.3/bin/mpicc
   ... module load maxwell openmpi/4.0.2; which mpicc: /software/openmpi/4.0.2/bin/mpicc
   ... module load maxwell openmpi/3.1.6; which mpicc: /software/openmpi/3.1.6/bin/mpicc
   ... module load maxwell openmpi/4.0.1; which mpicc: /software/openmpi/4.0.1/bin/mpicc

Some of these installations might become obsolete and be removed. Base versions of modules and installations will always be available and point to the most recent stable version:

[@max-wgse001 ~]$ module load maxwell openmpi/3 
[@max-wgse001 openmpi]$ which mpicc
/software/openmpi/3/bin/mpicc

# where /software/openmpi/3 is just a symbolic link to 3.1.x
# Likewise for openmpi/2, openmpi/4
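Which minor version a base module currently resolves to can be checked by inspecting the symlink directly (the actual target depends on the installed versions):

```shell
# Show the symlink target of the base installation
ls -ld /software/openmpi/3
readlink /software/openmpi/3
```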

Most of the openmpi variants are compiled with CUDA support. Running MPI applications on nodes without a GPU (and hence without CUDA) will emit a warning about CUDA libraries not being available. To suppress this and possibly other warnings, set the following MCA parameters:

export MCA=' --mca btl_openib_warn_no_device_params_found 0 --mca pml ucx --mca mpi_cuda_support 0 '
mpirun $MCA -np 8 ...

GCC 9

For some applications, the standard GCC compiler suite is simply too old. For such cases we provide a number of additional installations on Maxwell built with GCC 8 and 9; GCC 9 should generally be the better option.

@max:~$ module load maxwell gcc/9.3
# loading the gcc/9.3 module makes additional openmpi installations available, compiled with gcc 9.3
# these installations also use a more recent UCX version, which overcomes the Connect-X6 bug mentioned below
@max:~$ module avail openmpi
----------------------------------------------------------------- /software/gcc/9.3/etc/modulefiles ------------------------------------------------------------------
openmpi/3     openmpi/3.1   openmpi/3.1.6 openmpi/4     openmpi/4.0   openmpi/4.0.4


Connect-X6

A number of new Maxwell nodes are equipped with Connect-X6 InfiniBand interfaces. Nodes include max-exfl[189-260] and max-cssb[019-026]. The Connect-X6 adapters currently require special openmpi installations (at least when using UCX). Only versions 3.1.6+ and 4.0.3+ combined with UCX 1.8 are expected to function properly.
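Whether a node carries a ConnectX-6 adapter can be checked with the standard InfiniBand diagnostics; a sketch (exact output format varies with driver version):

```shell
# ConnectX-6 HCAs show up as mlx5 devices; the board_id reveals the adapter model
ibv_devinfo | grep -E 'hca_id|board_id'

# alternatively, via the kernel's sysfs entries:
cat /sys/class/infiniband/*/board_id 2>/dev/null
```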

Simple Test

Get an MPI example such as hellohybrid from https://rcc.uchicago.edu/docs/running-jobs/hybrid/index.html. For example:

module load maxwell openmpi/3
wget https://rcc.uchicago.edu/docs/_downloads/hellohybrid.c
mpicc -fopenmp -o hellohybrid hellohybrid.c

export MCA=' --mca btl_openib_warn_no_device_params_found 0 --mca pml ucx --mca mpi_cuda_support 0 '
mpirun $MCA -np 8 ./hellohybrid

# should work smoothly with openmpi 3.1.6+, 4.0.3+


Building OpenMPI with UCX

Builds for openmpi v4.0.3 (and v3.1.6) look like the following (note that '--without-verbs' appears to be essential):

oversion=4.0.3
ucxversion=1.8.0
cudaversion=10.1

cd /tmp
tar xf  /software/openmpi/src/ucx-$ucxversion.tar.gz
cd ucx-$ucxversion
./configure --prefix=/software/ucx/$ucxversion --with-cuda=/usr/local/cuda-$cudaversion --enable-mt --enable-devel-headers --enable-examples 
make -j 4
make install


tar xf /software/openmpi/src/openmpi-${oversion}.tar.bz2
cd  openmpi-${oversion}
./configure --prefix=/software/openmpi/${oversion} --disable-silent-rules --with-ucx=/software/ucx/${ucxversion} --with-hwloc=/usr --with-cuda=/usr/local/cuda-$cudaversion --without-sge --with-libltdl=/usr --without-verbs CC=gcc CXX=g++ FC=gfortran LDFLAGS='-Wl,-z,noexecstack' 
make -j 4
make install
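After installation, ompi_info can be used to verify that the build actually picked up UCX and CUDA support; a quick check (the version in the path follows the $oversion used above):

```shell
# Should list the ucx pml component among the installed MCA components
/software/openmpi/4.0.3/bin/ompi_info | grep -i ucx

# Shows whether CUDA support was compiled in
/software/openmpi/4.0.3/bin/ompi_info | grep -i cuda
```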

Documentation:

openmpi (like mpich and mvapich) has an exhaustive number of options to tune compile-time and runtime behavior. Please consult the OpenMPI documentation and FAQ:

Note:

  • RedHat has updated the MPI stack for mpich, mvapich and openmpi. For openmpi, this meant upgrading from version 1.5.x to 1.8.x.
  • OpenMPI has an unusual release cycle, distinguishing between super-stable releases and feature releases, and guarantees ABI stability only within a set of stable/feature releases. However, ABI breaks between release series, so there is a fair chance that openmpi-1.5.x applications will not work with openmpi-1.8.
  • OpenMPI-1.5 has been retired and was a feature rather than a stable release. Therefore no compatibility packages are available for openmpi-1.5, but only for v1.4 (see compat-openmpi).
  • OpenMPI 1.8 is the first MPI-3 compliant stable release. 
  • The updated MPI stack applies only to EL6.6 or higher.