Computing : Apptainer/Singularity

Singularity to Apptainer renaming

Singularity was renamed to Apptainer in 2022 due to legal constraints. In general, only the name has changed and all options remain the same; replacing the command `singularity` with `apptainer` should work on updated systems.
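
As a quick sanity check on a given system (on many installations, `singularity` is simply kept as a compatibility alias for `apptainer`), you can compare the two commands:

> apptainer --version
> singularity --version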


Apptainer/Singularity is available on CentOS 7 (EL7) workgroup servers and execute nodes

ssh to naf-{atlas,cms,belle,...}.desy

As the standard operating system on the DESY NAF and Grid servers is CentOS 7 (EL7), running applications that depend on another Linux distribution might bring complications. With self-provided Apptainer/Singularity containers, such applications can be run in the user context in the desired distribution.

For example, one could build an Apptainer/Singularity container with Ubuntu 20.04 and run an application that depends on Ubuntu within the container as an ordinary user. Compared to Docker as a container framework, Apptainer/Singularity needs no daemon and can be executed by any user.
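
As a sketch of that workflow (image name, tag, and target file name are just examples), one could convert a public Docker image into a local container image and run a command from it as an ordinary user:

> apptainer pull ubuntu20.sif docker://ubuntu:20.04
> apptainer exec ubuntu20.sif cat /etc/os-release

No daemon and no root privileges are involved; the resulting .sif file is an ordinary file owned by you.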

Quick Start Example

As an example: to start an interactive Scientific Linux 6 (SL6) container session, run on any CentOS 7 machine in the NAF

apptainer shell --cleanenv --contain --bind /afs:/afs --bind /nfs:/nfs --bind /pnfs:/pnfs --bind /cvmfs:/cvmfs /cvmfs/grid.desy.de/container/sl6
singularity shell --cleanenv --contain --bind /afs:/afs --bind /nfs:/nfs --bind /pnfs:/pnfs --bind /cvmfs:/cvmfs /cvmfs/grid.desy.de/container/sl6

which will start an interactive session in the SL6 container and make the local paths /nfs, /pnfs, /afs, and /cvmfs available in your container.

WARNING: SL6 reached its end of life on 2020-Nov-30; all support for SL6 ended at that date.

Since support for SL6 ended, only CentOS 7 (EL7) servers are available at DESY, so there is no native way to run legacy SL6 applications. To run such a legacy application, please evaluate whether you can run it in an SL6 Apptainer/Singularity container.

Image Cache

When you pull a Docker container via singularity pull, the files are cached by default in your home directory. Since the home directory resides on AFS, which is limited in space and a rather old file system, it is better to put the Singularity cache directories on another file system such as DUST.

To change these directories to something larger, create the new target directories on a suitable storage and export the following environment variables in your shell settings (.bashrc or .zshrc):

export SINGULARITY_TMPDIR=/nfs/dust/VO/user/YOURNAME/singularity/tmp

export SINGULARITY_CACHEDIR=/nfs/dust/VO/user/YOURNAME/singularity/cache

Of course, the paths have to exist and be accessible.
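
For example (VO and YOURNAME are placeholders as above), create the directories once; newer Apptainer versions also understand the corresponding APPTAINER_-prefixed variable names, analogous to the ones above:

> mkdir -p /nfs/dust/VO/user/YOURNAME/singularity/tmp /nfs/dust/VO/user/YOURNAME/singularity/cache
> export APPTAINER_TMPDIR=/nfs/dust/VO/user/YOURNAME/singularity/tmp
> export APPTAINER_CACHEDIR=/nfs/dust/VO/user/YOURNAME/singularity/cache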

Running Containers

Singularity containers can be 'run' in a few ways:

shell: interactive session within the container

Starting a container with shell will drop you into an interactive session - for example

> singularity shell (--cleanenv) --contain --... /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos6-base

Singularity container-id :~> export ATLAS_LOCAL_ROOT_BASE="/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase"
Singularity container-id :~> and so on

Normally, the environment variables from your native environment will be available in the container, with container-specific variables overriding the corresponding variables from your host.

If you want to ensure that you have a properly clean environment in the container, start it with ' --cleanenv ' (see the section 'Environments' below for more details).

exec: run something from the container

With Apptainer/Singularity's exec command, you can run a program or script living in the container, in the container's environment:

> singularity exec --cleanenv --contain /cvmfs/grid.desy.de/container/sl6.d/ cat /etc/redhat-release | rev

)laniF( 01.6 esaeler SOtneC

Here, we run just the container's 'cat' binary on the file '/etc/redhat-release' as it exists in the container - and we pipe the output (which ends up on our native environment's standard output) to our native program 'rev'.

run: start a predefined program

If the container maintainer has configured the container with a %runscript section, the container can be run with

> singularity run --optionshere /path/to/my/container

to start a default program. For example,

sudo singularity run --bind /:/rootfs:ro /cvmfs/grid.desy.de/container/cadvisor

will start a monitor program as defined in

> /cvmfs/grid.desy.de/container/cadvisor/.singularity.d/runscript

Unfortunately, you have to run it as root, as it needs a view of all the system's processes, cgroups, and so on (which a 'normal' container running under your 'normal' user will not be allowed to peek into).
Note that we bound the root file system '/' with :ro - meaning we mount it read-only - as we want to limit our container to the few things it needs (well, it can see everything under /, but luckily that is not writable).

By the way: all the Apptainer/Singularity magic lives in the container under "/.singularity.d/..." or, for newer installations, under "/.apptainer.d/".

help

For help on a container, run

> singularity help /path/to/MyContainerName.d

(Of course, the container maintainer has to have written proper documentation in the container's recipe file under %help. It is strongly encouraged to drop a few lines describing what the container does, how it is to be used, etc.)
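
For illustration, a minimal (hypothetical) recipe file carrying both a %help and a %runscript section might look like this:

Bootstrap: docker
From: centos:7

%help
    This container runs MyApplication on a CentOS 7 base.
    Use 'singularity run <container>' to start it, or 'singularity shell <container>' for an interactive session.

%runscript
    echo "Starting MyApplication ..."
    exec /opt/myapp/bin/myapp "$@"

Here, MyApplication and /opt/myapp/bin/myapp are made-up names; the point is that 'singularity help' prints the %help text, while 'singularity run' executes the %runscript.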

Environments

Normally, your current environment variables will be available in the container (environment variables also defined in the container will overwrite the ones from your host). To get a completely fresh environment, unspoiled by your current shell, start the container with '--cleanenv'.

This matters, for example, when you run a container with a different OS flavour than the host OS, so that paths from your original environment are not valid inside the container. Of course, other environment variables will also be dropped when using --cleanenv or --containall.

For how to control your container with environment variables, see the section on Environment Variables.
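
As a short sketch of the general mechanism (the variable names here are made up): host variables prefixed with SINGULARITYENV_ (or APPTAINERENV_ when using apptainer) are injected into the container even with --cleanenv, while unprefixed host variables are dropped:

> export SECRET=dont-leak-me
> export SINGULARITYENV_MYVAR=hello
> singularity exec --cleanenv --contain /cvmfs/grid.desy.de/container/sl6.d/ env | grep -E 'SECRET|MYVAR'
MYVAR=hello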

Binding Mount Namespaces

Without explicitly bind mounting other file systems, a container instance has only a very limited view of the mount namespace (which is good practice: only the relevant paths should be visible in a container).
That is, to get a broader view, you have to bind the paths you want. For example, to also get views of the various network file systems, do something like

> singularity shell --contain --bind /afs:/afs --bind /cvmfs:/cvmfs --bind /pnfs:/pnfs --bind /nfs:/nfs  /cvmfs/grid.desy.de/container/sl6.d

  • The syntax is "--bind /source/path/on/your/host:/target/path/in/your/container", so "--bind /cvmfs:/foo/baz" will try to mount /cvmfs from your host (if it exists...) to /foo/baz in your container (note: depending on the local Apptainer/Singularity configuration, mounting directories into a container where the target path does not already exist in the container may be limited or impossible)
  • You can concatenate the binds with a comma: "--bind /afs:/afs,/cvmfs:/cvmfs ..."
  • To protect mounts from getting accidentally overwritten, mount them read-only, e.g., "--bind /afs:/afs:ro" (see the sketch after this list)
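
Putting the last two points together, a sketch combining comma-separated binds with a read-only flag:

> singularity shell --contain --bind /afs:/afs:ro,/cvmfs:/cvmfs /cvmfs/grid.desy.de/container/sl6.d

Attempts to write below /afs from inside this container will then fail with a 'Read-only file system' error.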

Tweaking Home and Work directories

With the '--home' flag you can change the ${HOME} directory in the container.
So, if you don't want to start into a container with your current home directory also being the home directory in the container, switch it like this:

> mkdir /tmp/mytmphome
> singularity CMD --home /tmp/mytmphome ... MyContainerName

This will write everything you do in the container's HOME to /tmp/mytmphome on your host (useful if you don't want to mess with your $HOME). Here, CMD is a placeholder for shell, exec, or run.

With '--no-home', your ${HOME} will not be available in the container (useful if you really don't want the container to touch your ${HOME}).

Similarly, --scratch (and, when --contain is switched on, also '--workdir') can be used to specify where temporary data from the container will be located on your host.
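
A sketch (MyContainerName is a placeholder): create a directory on the host and let the container's /tmp, /var/tmp, and an extra scratch directory live there:

> mkdir /tmp/mytmpwork
> singularity shell --contain --workdir /tmp/mytmpwork --scratch /scratch MyContainerName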