Summary

Source:  http://www.gromacs.org/

License:  LGPL

Path:  /usr/bin (resp. under the mpi-specific path)

Documentation:  http://www.gromacs.org/Documentation

Citations:  http://www.gromacs.org/Gromacs_papers

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

Using gromacs

gromacs is installed in several variants. For the custom installations on Maxwell see gromacs on Maxwell.

The generally available version of gromacs is installed in the system path and doesn't require any special setup:

# get some information about the setup
[max]$ gmx mdrun --version
GROMACS:    gmx mdrun, VERSION 2018.8
<SNIP>


gromacs comes with support for the openmpi and mpich MPI implementations. To use them you need to initialize the environment using the module command:

[max]% module avail
[max]% module load mpi/openmpi-x86_64   # correspondingly for mpich.
[max]% which mdrun_openmpi
/usr/lib64/openmpi/bin/mdrun_openmpi

See below for an example using openmpi.

Running gromacs batch-jobs on Maxwell
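
Both job scripts shown below are submitted with sbatch in the usual way. A minimal sketch, assuming the script was saved under the placeholder name my-gromacs-job.sh:

[max]% sbatch my-gromacs-job.sh
[max]% squeue -u $USER    # check the state of the job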

A single-node job without MPI

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=maxcpu
#SBATCH --time=0-01:00:00
unset LD_PRELOAD   # make sure preloaded libraries from the login environment don't interfere with the job
# Sample: /beegfs/desy/group/it/Benchmarks/gromacs/sample-gromacs-single-node-20k.sh

base=/beegfs/desy/user/$USER/GROMACS
project=20k
rm -rf $base/$project

mkdir -p $base/$project
pushd $base/$project

# fetch the benchmark sample:
cp /beegfs/desy/group/it/Benchmarks/gromacs/HECBioSim/${project}-atoms/benchmark.tpr .

# gromacs doesn't benefit from hyperthreaded cores, so just use the physical cores
# (nproc counts logical CPUs, i.e. twice the number of physical cores):
nt=$(( $(nproc) / 2 ))

INPUT=benchmark.tpr
OUTPUT=benchmark.log

STARTTIME=$(date +%s)
gmx mdrun -nt $nt -s ${INPUT} -g ${OUTPUT}
ENDTIME=$(date +%s)

ELAPSED=$(($ENDTIME - $STARTTIME))

x=$(grep Performance $OUTPUT)                                  # performance summary line from the gromacs log
c=$(grep "model name" /proc/cpuinfo | head -1 | cut -d: -f2)   # CPU model of the node

cat <<EOF
Number of threads: $nt
Time elapsed:      $ELAPSED seconds
NodeList:          $SLURM_JOB_NODELIST
Processor:         $c

                  (ns/day)      (hour/ns)
$x
EOF

popd
exit


################################################################
# output:
Number of threads: 20
Time elapsed:      165 seconds
NodeList:          max-wn044
Processor:         Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
                  (ns/day)    (hour/ns)
Performance:       52.321        0.459
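
Note: the script above assumes two hyperthreads per physical core (nproc/2). If in doubt, the number of physical cores can also be determined explicitly with lscpu; a minimal sketch, not part of the original script:

# count unique (core,socket) pairs reported by lscpu = number of physical cores
nt=$(lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l)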



A multi-node job with MPI

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --partition=maxcpu
#SBATCH --time=0-01:00:00
# sample: /beegfs/desy/group/it/Benchmarks/gromacs/sample-gromacs-mpi-20k.sh

source /etc/profile.d/modules.sh
module load mpi/openmpi-x86_64

base=/beegfs/desy/user/$USER/GROMACS
project=20k
rm -rf $base/$project

mkdir -p $base/$project
pushd $base/$project

# fetch the benchmark sample:
cp /beegfs/desy/group/it/Benchmarks/gromacs/HECBioSim/${project}-atoms/benchmark.tpr .

# gromacs doesn't benefit from hyperthreaded cores, so use only the physical cores.
# gromacs recommends between 1 and 6 OpenMP threads per MPI rank; as an example we use 2 threads per rank.
# The core count reported by scontrol includes hyperthreads, so: mpi-ranks = "total number of cores" / (2*2)

total_cores=$(scontrol show job $SLURM_JOB_ID | grep NumCPUs | cut -d= -f3 | awk '{print $1}')
np=$(( $total_cores / 4 ))
nt=2

INPUT=benchmark.tpr
OUTPUT=benchmark.log
STARTTIME=$(date +%s)


# the actual gromacs run:
# distribute the $np MPI ranks round-robin across the nodes (--map-by node), each rank running $nt OpenMP threads:
mpirun --map-by node -np $np `which mdrun_openmpi` -ntomp $nt  -s ${INPUT} -g ${OUTPUT}

ENDTIME=$(date +%s)
ELAPSED=$(($ENDTIME - $STARTTIME))

x=$(grep Performance $OUTPUT)

cat <<EOF
Time elapsed:           $ELAPSED seconds
NodeList:               $SLURM_JOB_NODELIST
Number of nodes:        $SLURM_JOB_NUM_NODES
Number of mpi ranks:    $np
Number of threads/rank: $nt

                  (ns/day)    (hour/ns)
$x
EOF

popd
exit

################################################################
# output:

Time elapsed:           101 seconds
NodeList:               max-wn[012,061]
Number of nodes:        2
Number of mpi ranks:    20
Number of threads/rank: 2

                  (ns/day)    (hour/ns)
Performance:       86.329        0.278
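
To verify how the MPI ranks are placed across the nodes, openmpi's --report-bindings option can be added to the mpirun call; a sketch based on the command above:

mpirun --map-by node --report-bindings -np $np `which mdrun_openmpi` -ntomp $nt -s ${INPUT} -g ${OUTPUT}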