Maxwell : Container

Singularity (Apptainer)

Singularity allows you to run Docker or Singularity containers in a user context. It is particularly handy for MPI jobs or when you need a GPU. Please consult the Singularity documentation for further information.

Singularity is installed on all Maxwell nodes and can also be used on the interactive login nodes. Be aware that Singularity unpacks images in /tmp by default, and /tmp is very small. Please redirect the temporary Singularity space to some other location:

export SINGULARITY_TMPDIR=/beegfs/desy/user/$USER/stmp
export SINGULARITY_CACHEDIR=/beegfs/desy/user/$USER/scache
mk-beegfs # if you don't have a directory in /beegfs yet
mkdir -p $SINGULARITY_TMPDIR $SINGULARITY_CACHEDIR

singularity run --nv --bind /beegfs:/beegfs docker://tollerort.desy.de/maxsoft/vitis-ai-tensorflow2-gpu:3.0.0

# --nv:                      enable support for NVIDIA GPUs
# --bind /beegfs:/beegfs:    mount /beegfs inside the container. You might want to do the same for /asap3, for example.
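
To quickly verify that GPUs are visible inside a container, you can run nvidia-smi through singularity exec, for example (the image is the one from above; --nv binds the host's NVIDIA tools into the container):

singularity exec --nv docker://tollerort.desy.de/maxsoft/vitis-ai-tensorflow2-gpu:3.0.0 nvidia-smi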

Running an EL7 container on Maxwell

export SINGULARITY_TMPDIR=/beegfs/desy/user/$USER/stmp
export SINGULARITY_CACHEDIR=/beegfs/desy/user/$USER/scache
mk-beegfs # if you don't have a directory in /beegfs yet
mkdir -p $SINGULARITY_TMPDIR $SINGULARITY_CACHEDIR

mkdir -p $HOME/EL7    # the home directory used inside the container must exist
export SiMo="-H $HOME/EL7 -B /beegfs/desy:/beegfs/desy -B /gpfs/maxwell/software:/software"
singularity shell $SiMo shub://billspat/CentOS7-Singularity:latest

# -H: home directory inside the container. Note: this won't change your current working directory
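
Instead of an interactive shell you can also run single commands non-interactively with singularity exec, for example to check which OS release the container provides:

singularity exec $SiMo shub://billspat/CentOS7-Singularity:latest cat /etc/os-release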

Running an EL9 container on Maxwell

export SINGULARITY_TMPDIR=/beegfs/desy/user/$USER/stmp
export SINGULARITY_CACHEDIR=/beegfs/desy/user/$USER/scache
mk-beegfs # if you don't have a directory in /beegfs yet
mkdir -p $SINGULARITY_TMPDIR $SINGULARITY_CACHEDIR

mkdir -p $HOME/EL9    # the home directory used inside the container must exist
export SiMo="-H $HOME/EL9 -B /gpfs/maxwell/software/el9:/software"
singularity shell $SiMo library://library/default/rockylinux:9 
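
The EL9 container can of course also be used in a batch job. A minimal sbatch sketch; the partition and time limit are placeholders you need to adapt:

#!/bin/bash
#SBATCH --partition=maxcpu
#SBATCH --time=01:00:00
#SBATCH --job-name=el9-container

export SINGULARITY_TMPDIR=/beegfs/desy/user/$USER/stmp
export SINGULARITY_CACHEDIR=/beegfs/desy/user/$USER/scache

# run a single command inside the EL9 container
singularity exec -H $HOME/EL9 -B /gpfs/maxwell/software/el9:/software \
    library://library/default/rockylinux:9 cat /etc/os-release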

Using SLURM inside a container

To be able to use SLURM inside a container, the image must come with a matching SLURM installation, and the SLURM config as well as the munge socket need to be mounted. To do so, download SLURM and copy the tarball into the image during the build. Make sure munge and the build tools (compiler) are installed in the image. A simple SLURM compilation is sufficient:

tar xf /slurm/slurm-23.11.1.tar.bz2
cd slurm-23.11.1
./configure --prefix=/usr/local --with-munge
make -j 4 && make install
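
Copying the tarball into the image and compiling it can, for example, be done in the %files and %post sections of a Singularity definition file. A minimal sketch, assuming a Rocky Linux 9 base image; package names and the need for EPEL may differ on other distributions:

Bootstrap: docker
From: rockylinux:9

%files
    slurm-23.11.1.tar.bz2 /slurm/slurm-23.11.1.tar.bz2

%post
    # munge and build tools are needed to compile slurm
    dnf install -y epel-release
    dnf install -y gcc make bzip2 munge munge-devel
    cd /slurm
    tar xf slurm-23.11.1.tar.bz2
    cd slurm-23.11.1
    ./configure --prefix=/usr/local --with-munge
    make -j 4 && make install

Such a definition file can then be built with, for example, singularity build --fakeroot slurm.sif slurm.def.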

To use SLURM, you need the slurm config and the munge socket from the host. For example:

export SINGULARITY_TMPDIR=/beegfs/desy/user/$USER/stmp
export SINGULARITY_CACHEDIR=/beegfs/desy/user/$USER/scache
mk-beegfs # if you don't have a directory in /beegfs yet
mkdir -p $SINGULARITY_TMPDIR $SINGULARITY_CACHEDIR

export SLURMENV="--env SLURM_CONF=/etc/slurm/slurm.conf -B /etc/slurm:/etc/slurm -B /run/munge:/run/munge -B /etc/passwd:/etc/passwd"

singularity run $SLURMENV --nv --bind /beegfs:/beegfs docker://tollerort.desy.de/maxsoft/vitis-ai-tensorflow2-gpu:3.0.0

# /etc/slurm adds the slurm config
# /run/munge adds the munge socket
# /etc/passwd is needed so your account is a known user inside the container
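
With these mounts in place, the usual SLURM client commands should work from inside the container. A quick sanity check (assuming the image contains a matching SLURM installation as described above):

singularity exec $SLURMENV --bind /beegfs:/beegfs docker://tollerort.desy.de/maxsoft/vitis-ai-tensorflow2-gpu:3.0.0 sinfo -s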