Summary
Source: http://lammps.sandia.gov/
License: GNU General Public License Version 2 (GPLv2)
Path: see below
Documentation: http://lammps.sandia.gov/doc/Manual.html
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Using lammps
We provide two lammps installations:
- lammps v20190807 from the EPEL repository.
- lammps v2021-07 (and possibly newer) installed in /software/lammps
Local lammps
More recent version(s) of lammps are installed under /software/lammps. In this case, only an openmpi-4 compiled version is supplied. The configuration is explained further below.
# initialize
[max]% module load maxwell lammps      # or lammps/2021-07 for a specific version

# a sample run
[max]% mpirun -N $(( $(nproc) / 2 )) -mca pml ucx lmp -in in.lj.hex > heat.log 2>&1

# for quick benchmarks see below
Note: only use physical cores ('$(( $(nproc) / 2 ))'); using all (logical) cores will make runs slower.
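If in doubt about the physical core count of a node, it can be derived from lscpu; the snippet below is only a sketch and falls back to the nproc/2 rule used above, which assumes two hardware threads per core.

# count unique (core,socket) pairs to get the number of physical cores
physical_cores=$(lscpu -p=CORE,SOCKET 2>/dev/null | grep -v '^#' | sort -u | wc -l)
# fall back to the nproc/2 rule from above (assumes 2-way SMT)
[ "$physical_cores" -gt 0 ] || physical_cores=$(( $(nproc) / 2 ))
echo "physical cores: $physical_cores"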
EPEL lammps
The lammps installation available from the EPEL repository is somewhat dated and lacks a number of features, but might still be sufficient.
Essential variables like LAMMPS_POTENTIALS are defined in /etc/profile.d/lammps.(c)sh, which are usually sourced at login.
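Batch jobs and other non-login shells may not have sourced these profile scripts; a minimal check, assuming the EPEL default path quoted above, could look like this:

# source the EPEL profile script if LAMMPS_POTENTIALS is not set yet
if [ -z "$LAMMPS_POTENTIALS" ]; then
    source /etc/profile.d/lammps.sh
fi
echo "LAMMPS_POTENTIALS=$LAMMPS_POTENTIALS"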
Currently, three variants of lammps are installed: serial, openmpi and mpich:
# serial lammps does not need any setup:
[max]% which lmp
/usr/bin/lmp

# mpich compiled lammps
[max]% xwhich lmp_mpich
Provided by module(s) ...
module load mpi/mpich-x86_64; which lmp_mpich: /usr/lib64/mpich/bin/lmp_mpich

# openmpi compiled lammps
[max]% xwhich lmp_openmpi
Provided by module(s) ...
module load mpi/openmpi-x86_64; which lmp_openmpi: /usr/lib64/openmpi/bin/lmp_openmpi
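A minimal interactive run with, for example, the openmpi variant might then look like the following sketch (in.lj.hex is the same sample input used elsewhere on this page):

# load the matching MPI environment and run on the physical cores only
[max]% module load mpi/openmpi-x86_64
[max]% mpirun -N $(( $(nproc) / 2 )) lmp_openmpi -in in.lj.hex > heat.log 2>&1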
lammps in batch
A simple batch script could look like this (using the heat sample that ships with lammps):
#!/bin/bash
#SBATCH --partition=short
#SBATCH --time=0-04:00:00
#SBATCH --nodes=1
#SBATCH --job-name=lammps.heat
#SBATCH --output=heat.out
unset LD_PRELOAD
source /etc/profile.d/modules.sh
module purge
module load maxwell lammps

# use only the physical cores
procs=$(( $(nproc) / 2 ))

mpirun -N $procs --mca pml ucx --mca opal_warn_on_missing_libcuda 0 /software/lammps/2021-07/bin/lmp -in in.lj.hex > heat.$procs.log 2>&1

el=$(grep "Total wall time" heat.$procs.log | awk '{print $4}')
echo "Number of cores used: $procs  Elapsed time: $el"
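Assuming the script is saved as lammps-heat.sh (the name is arbitrary), it is submitted and monitored the usual way:

[max]% sbatch lammps-heat.sh
[max]% squeue -u $USER      # check the job state
[max]% tail -f heat.out     # follow the slurm output once the job runs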
Quick benchmarks
Using the heat sample on AMD EPYC 7402 nodes (48 physical / 96 logical cores), the lammps runtimes (wall time, mm:ss) look like this:
nodes \ cores per node | 1 | 2 | 4 | 8 | 16 | 32 | 48 | 96
---|---|---|---|---|---|---|---|---
1 | 33:00 | 17:59 | 10:07 | 05:50 | 03:25 | 02:36 | 02:34 | 03:50
2 | 18:20 | 09:50 | 05:57 | 03:30 | 02:37 | 02:37 | 02:48 | 05:30
4 | 09:55 | 05:45 | 03:42 | 02:33 | 02:23 | 03:17 | 04:02 | 09:10
At least for this particular sample, using only the physical cores of a single node appears to be the best option.
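To repeat such a scan on a single node, a crude loop over core counts could look like the sketch below (--oversubscribe is only needed once the process count exceeds the default slot count):

# crude single-node scan over core counts with the heat sample
for procs in 1 2 4 8 16 32 48 96; do
    mpirun -N $procs --oversubscribe -mca pml ucx lmp -in in.lj.hex > heat.$procs.log 2>&1
    echo "$procs cores: $(grep 'Total wall time' heat.$procs.log | awk '{print $4}')"
done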
lammps configuration
Local lammps
wget https://github.com/lammps/lammps/archive/refs/tags/patch_28Jul2021.tar.gz -O lammps-patch_28Jul2021.tar.gz
tar xf lammps-patch_28Jul2021.tar.gz
cd lammps-patch_28Jul2021/
mkdir build && cd build

module load maxwell gcc/9.3 openmpi/4.0.4

cmake --loglevel=verbose -DCMAKE_INSTALL_PREFIX=/software/lammps/2021-07 -DPKG_EXTRA-COMPUTE=yes -DPKG_VORONOI=yes \
      -DPKG_PYTHON=ON -DPKG_KIM=yes -DPKG_COMPRESS=yes -DPKG_BODY=yes -DPKG_KOKKOS=yes -DPKG_MANYBODY=yes -DPKG_MOLECULE=yes \
      -DDOWNLOAD_KIM=yes -DPKG_ATC=yes -DPKG_H5MD=yes -DPKG_EXTRA-MOLECULE=yes -DPKG_EXTRA-PAIR=yes -DPKG_SPIN=yes \
      -DPKG_SPH=yes -DPKG_QMMM=yes -DPKG_REACTION=yes -DPKG_PLUMED=yes -DPKG_DIPOLE=yes -DPKG_DIELECTRIC=yes \
      -DPKG_DIFFRACTION=yes -DPKG_KSPACE=yes -DPKG_RIGID=yes ../cmake
make -j 8
make install

export LAMMPS_POTENTIALS=/software/lammps/2021-07/share/lammps/potentials   # part of the modulefile
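A quick smoke test of the freshly built binary (before the modulefile exists) might be:

# point lammps at its potentials and check that the binary starts
export LAMMPS_POTENTIALS=/software/lammps/2021-07/share/lammps/potentials
/software/lammps/2021-07/bin/lmp -help | head -n 20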
EPEL lammps
# from the SPEC-file
cmake3 -C ../cmake/presets/all_on.cmake -C ../cmake/presets/nolib.cmake \
       -DBUILD_LIB=ON -DPKG_PYTHON=ON -DPKG_EXTRA-COMPUTE=ON -DPKG_VORONOI=ON -DPKG_USER-ATC=ON \
       -DPKG_USER-H5MD=ON -DPKG_KIM=ON -DBUILD_TOOLS=ON -DENABLE_TESTING=ON -DPKG_MOLECULE=yes \
       -DPYTHON_INSTDIR=%{python3_sitelib} -DCMAKE_INSTALL_SYSCONFDIR=/etc -DPKG_GPU=ON -DGPU_API=OpenCL \
       -DBUILD_OMP=ON -DCMAKE_INSTALL_BINDIR=${MPI_BIN:-%{_bindir}} -DCMAKE_INSTALL_LIBDIR=${MPI_LIB:-%{_libdir}} \
       -DLAMMPS_MACHINE="${MPI_SUFFIX#_}" -DLAMMPS_LIB_SUFFIX="${MPI_SUFFIX#_}" -DCMAKE_INSTALL_MANDIR=${MPI_MAN:-%{_mandir}} \
       ${mpi:+-DBUILD_MPI=ON -DPKG_MPIIO=ON \
       -DCMAKE_EXE_LINKER_FLAGS="%{__global_ldflags} -Wl,-rpath -Wl,${MPI_LIB} -Wl,--enable-new-dtags" \
       -DCMAKE_SHARED_LINKER_FLAGS="%{__global_ldflags} -Wl,-rpath -Wl,${MPI_LIB} -Wl,--enable-new-dtags"} \
       $(test -z "${mpi}" && echo -DBUILD_MPI=OFF -DPKG_MPIIO=OFF) -DLAMMPS_TESTING_SOURCE_DIR=$PWD/tests ../cmake

export LAMMPS_POTENTIALS=${LAMMPS_POTENTIALS-/usr/share/lammps/potentials}   # /etc/profile.d/lammps.sh
export MSI2LMP_LIBRARY=${MSI2LMP_LIBRARY-/usr/share/lammps/frc_files}        # /etc/profile.d/lammps.sh