
Docker Installation

If you want to use Docker, install it via the software store "packages":

  1. After opening the program you will see a search bar at the top. Type "docker.io" in the search bar and start the search by pressing Enter.
  2. Look for the package "Linux container runtime docker.io-...postinstall.auth.plugin". Select the package and click Install. The Install button is located near the bottom right corner.
  3. A green button with the text "Apply changes" should appear close to the top right corner.
  4. Click this button to start installing Docker.

Verifying the docker group membership

Usually it is not possible to run Docker containers without sudo permissions. For this reason, the logged-in user automatically becomes a member of the local docker group on the computer, which allows running Docker commands. This currently does not work with the gnome-terminal (for more information about this click here). Therefore, when using Docker, please use one of the other terminals such as Terminator, XTerm or Tilda.

After starting one of these terminals, test with the command id whether your user is a member of the group docker. The output should look similar to this:

 "id" output

~> id
uid=54321(your_username) gid=12345(your_group) groups=12345(your_group), ..., 132(docker), ...

If you do not have an entry for the docker group, please log out and log in again. After this, try the id command again. Now you should be a member of the docker group.

Please note: This only works if docker.io is installed on your system, because the local docker group is not created until the package is installed.
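If you prefer, the group check can be scripted. The following is a minimal sketch assuming a POSIX shell; the helper name in_docker_group is made up for this example:

```shell
#!/bin/sh
# Hypothetical helper: check whether a space-separated list of group
# names (as printed by "id -nG") contains the docker group.
in_docker_group() {
    echo "$1" | tr ' ' '\n' | grep -qx docker
}

# Check the current user's groups.
if in_docker_group "$(id -nG)"; then
    echo "docker group: OK"
else
    echo "docker group: missing - log out and log in again"
fi
```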

Testing Docker

After installing Docker, you can test whether it works by issuing the following command:

docker run hello-world

Here you can see the successful output:

Successful output
~> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Already exists 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
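This smoke test can also be scripted. The sketch below uses a made-up helper, docker_ok, that checks arbitrary command output for the success banner shown above:

```shell
#!/bin/sh
# Hypothetical helper: return success if the given output contains
# Docker's hello-world success banner.
docker_ok() {
    echo "$1" | grep -q "Hello from Docker!"
}

# Usage (requires a working Docker setup and docker group membership):
# docker_ok "$(docker run --rm hello-world)" && echo "docker works"
```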

Authorization Plugin

If you use Docker with certain options, you will get an error like "authorization denied by plugin openpolicyagent/opa-docker-authz-v2:0.4: request rejected by administrative policy".
The following options are blocked by the plugin to ensure a high isolation between the host and the containers:

  • --userns
  • --privileged
  • --net=host
  • -p <port below 1024>:<any port is allowed here>
  • --security-opt with any value other than: null, "label=type:container_runtime_t", "no-new-privileges"
  • disabling or removing authorization plugins

Please omit these options or only specify the allowed values (e.g. a host port above 1024) in order to work with Docker containers.
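The port rule can be sketched as a small shell check. port_allowed is a made-up helper that mirrors the policy described above (host ports below 1024 are rejected):

```shell
#!/bin/sh
# Hypothetical helper mirroring the plugin's port policy: the host part
# of a -p <host>:<container> mapping must be 1024 or above.
port_allowed() {
    host_port=${1%%:*}
    [ "$host_port" -ge 1024 ]
}

# Examples:
port_allowed 8080:80 && echo "-p 8080:80 is allowed"
port_allowed 80:80   || echo "-p 80:80 is blocked by the plugin"
```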

Running GUI applications in Docker

If you want to run a GUI application via Docker it is necessary to create a connection to the X server of the host. This can be achieved by using SSH with X11 forwarding, VNC or sharing the X11 socket of the host system with the container. The following examples show how to create a docker image containing GIMP by describing the install instructions in a Dockerfile, building the image and finally connecting to the GUI app.

1. Creating a docker image

In order to generate a docker image you have to define a Dockerfile. The Dockerfile is a text document consisting of a series of instructions on how to build the docker image. The file recognizes several commands like FROM, CMD, VOLUME, ENV, ENTRYPOINT, LABEL, EXPOSE, COPY and more. Basically, when building the docker image the Dockerfile defines the instructions which will run serially to assemble the desired image.

1.1 Creating a working directory

First of all, a working directory needs to be created:

~> mkdir docker-image
~> cd docker-image/
~/docker-image>

1.2 Creating the Dockerfile

After creating the working directory, you can define the Dockerfile. Below are multiple examples illustrating the different installation approaches.

Install via package manager
~/docker-image> vim Dockerfile
FROM ubuntu
RUN apt-get update && \
    apt-get install -y --no-install-recommends gimp && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*
CMD ["gimp"]
Install via Personal Package Archive (PPA)
FROM ubuntu
RUN apt-get update && \
    apt-get install -y --no-install-recommends software-properties-common && \
    add-apt-repository ppa:otto-kesselgulasch/gimp && \
    apt-get update && \
    apt-get install -y --no-install-recommends gimp 
CMD ["gimp"]

Install from source code
# Naming the stage with AS
FROM ubuntu AS builder
# Installing every package which is required to compile GIMP
RUN set -x && \
    apt-get update && \
# --no-install-recommends prevents installing unneeded packages
    apt-get install -y --no-install-recommends \
    autoconf \
    automake \
    cmake \
    dh-autoreconf \
    fontconfig \
    gcc \
    g++ \
    gimp-data \
    gimp-help-common \
    gimp-help-en \
    glib-networking \
    intltool \
    libbz2-dev \
    libgexiv2-dev \
    libgimp2.0 \
    libglib2.0-dev \
    libgtk-3-dev \
    libgtk2.0-dev \
    libjson-glib-dev \
    liblcms2-2 \
    liblcms2-dev \
    libmypaint-dev \
    libpango-1.0.0 \
    libpng-dev \
    libpoppler-glib-dev \
    librsvg2-dev \
    libtiff-dev \
    make \
    mypaint-brushes \
    nasm \
    pkg-config \
    poppler-data \
    wget \
    yasm
# apt-get clean etc. is done later
WORKDIR /tmp/
RUN wget https://download.gimp.org/mirror/pub/gimp/v2.10/gimp-2.10.0-RC1.tar.bz2 https://download.gimp.org/mirror/pub/gimp/v2.10/SHA512SUMS \
    https://download.gimp.org/pub/babl/0.1/babl-0.1.64.tar.bz2 https://download.gimp.org/pub/babl/0.1/SHA256SUMS \
    https://download.gimp.org/pub/gegl/0.4/gegl-0.4.16.tar.bz2 https://download.gimp.org/pub/gegl/0.4/SHA256SUMS \
    https://downloads.sourceforge.net/libjpeg-turbo/libjpeg-turbo-2.0.2.tar.gz
# Verify the archives against the downloaded checksum files
# (wget saves the second file named SHA256SUMS as SHA256SUMS.1)
RUN cd /tmp; grep gimp-2.10.0-RC1.tar.bz2 SHA512SUMS | sha512sum -c - && \
             grep babl-0.1.64.tar.bz2 SHA256SUMS | sha256sum -c - && \
             grep gegl-0.4.16.tar.bz2 SHA256SUMS.1 | sha256sum -c - && \
# gegl requires babl and libjpeg-turbo in a more recent version -> needs to be compiled from source, too
    tar xvjf babl-0.1.64.tar.bz2; cd babl-0.1.64; ./autogen.sh; make -j4; make install -j4; \
    cd /tmp; tar xvzf libjpeg-turbo-2.0.2.tar.gz; cd libjpeg-turbo-2.0.2; cmake -G"Unix Makefiles"; make install -j4; \
    cd /tmp; tar xvjf gegl-0.4.16.tar.bz2; cd gegl-0.4.16; ./autogen.sh; make -j4; make install -j4; \
# libgegl-dev is needed to compile gimp, but if installed before compiling gegl-0.4 it would result in an error
    apt-get install -y --no-install-recommends libgegl-dev; \
    cd /tmp; tar xvjf gimp-2.10.0-RC1.tar.bz2; cd gimp-2.10.0-RC1; \
# install to custom directory by setting --prefix
    ./configure --prefix=/root/gimp --disable-python; make -j4; make install -j4; \
# weird gimp error if libgegl-dev is not reinstalled: gimp cannot find gegl 0.3 although it is installed..
    apt-get remove -y gir1.2-gegl-0.3 libgegl-0.3-0 libgegl-dev; apt-get install -y --no-install-recommends libgegl-dev \
# reducing image size by clearing cache/tmp
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*

FROM ubuntu
WORKDIR /usr/local/bin/
# only install necessary packages to run GIMP (optimally you would compile the program statically -> every dependency is included)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    libgegl-dev \
    libgexiv2-2 \
    libgtk2.0-dev \
    libmypaint-1.3-0 \
    libpoppler-glib-dev \
    poppler-data \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*
COPY --from=builder /root/gimp /root/gimp
ENTRYPOINT ["/root/gimp/bin/gimp"]
CMD ["--new-instance"]

When your goal is to compile the program from source code, it is recommended to use multi-stage builds as shown in this example: in the first stage a container is created with every package needed to compile GIMP. At the end of the FROM instruction the stage is named builder, which allows referring to this stage more easily in the Dockerfile.
The tar archives can be downloaded via RUN wget/curl or the ADD instruction. With the latter, the layer will not be cached: ADD always downloads the archive from the remote URL, while RUN uses the layer cache as long as the wget/curl command does not change. Because of this, RUN with wget/curl usually results in faster build times. Using RUN with wget/curl is also strongly recommended by Docker to reduce image size.
The compiling steps are dependent on the respective program you want to install. For example, libjpeg-turbo requires cmake, gegl provides the autogen-script which invokes configure internally and GIMP is installed by issuing ./configure; make; make install.

Theoretically, the second stage could be commented out and the container would still work. So, why is there a second stage? It is primarily used to reduce the image size. The second FROM instruction marks the beginning of the second stage, in which only the packages required to run GIMP are installed. COPY allows copying the compiled binary from the first stage into the second. As a result, GIMP and its dependencies are installed in the final container, but none of the packages that were only needed to compile it. Ideally, the program is compiled statically with all dependencies; then you can copy it into the second stage without installing additional packages, and the final image size will be very small.


The first instruction in the Dockerfile must specify a base image with the command FROM. In this example Ubuntu is used as the base image, but you can use any working image as a base image. By declaring the base image there already is a base layer and the instructions after the FROM command in the Dockerfile will be executed on top of this layer.

To run GIMP it needs to be installed beforehand. This is achieved in the second line of the Dockerfile. The RUN instruction executes the given command and creates a new layer which contains the changes made to the base image.
So, first the package lists are updated and then GIMP is installed.

Tips: When working with apt-get, please consider whether the command is interactive. If it is, the -y option can be used to automatically answer yes to prompts. To reduce the size of the container image, install GIMP with the --no-install-recommends option, issue apt-get clean as the last apt-get command, and clear the contents of the various cache/temp directories. Following these steps reduced the file size of GIMP's container image by 130 MB.

Finally, the CMD instruction starts GIMP. An alternative to the CMD instruction is ENTRYPOINT: arguments given on the docker run command line are passed to the ENTRYPOINT application in the container, while CMD only supplies the defaults.

Consider this Dockerfile:

FROM ubuntu
RUN apt-get update && \
    apt-get install -y --no-install-recommends gimp && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*
ENTRYPOINT ["gimp"]
CMD ["--new-instance"]

The GIMP command is defined as the ENTRYPOINT. Due to this you can pass arguments to GIMP when you run the container. When no argument is given, the CMD stated in the last line of the Dockerfile will be executed. Here are some examples:

~> docker run -e "DISPLAY=$DISPLAY" -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /home/<your_username>/:/home/<your_username>/ gimp # GIMP starts a new instance

~> docker run -e "DISPLAY=$DISPLAY" -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /home/<your_username>/:/home/<your_username>/ gimp --help
Usage:
  gimp [OPTION?] [FILE|URI...]

GNU Image Manipulation Program

Help Options:
  -h, --help                          Show help options
...

~> docker run -e "DISPLAY=$DISPLAY" -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /home/<your_username>/:/home/<your_username>/ gimp /home/<your_username>/Pictures/GIMP-example.png
# GIMP opens the picture. Note: The path needs to be valid inside the container.

(If you make changes to the Dockerfile and you want to test them, you need to rebuild the image. This is shown in 1.3.)

1.3 Building the image

Now we can build the docker image:

~/docker-image> docker build -t gimp .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM ubuntu
...

The command to create the docker image is docker build. It requires the path to the build context (here the current directory containing the Dockerfile), and you can tag the image with the -t option. After issuing the build command, every step defined in the Dockerfile is executed sequentially and the image is assembled.

You can list the docker images with the following command:

~/docker-image> docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
gimp                latest              ce97114b8689        28 seconds ago      508MB
...

2. Using the container

Docker images can be started via the docker run command.

2.1 X forward via x11docker

This method has some advantages compared to the others: you do not have to modify the Dockerfile, and it is more secure because it creates a second X server specifically for the Docker containers to avoid X security leaks.

Installation steps:

  1. In your home directory clone the git repository: git clone https://github.com/mviereck/x11docker.git
  2. cd x11docker
  3. sed -i 's/Createcontaineruser="yes"/Createcontaineruser="no"/g' x11docker
  4. Make sure that nxagent or xephyr (xserver-xephyr) is installed. Both packages are available in the software store "packages". nxagent displays the application seamlessly, while xephyr creates a "container" window surrounding the GUI app. nxagent might not work with every program, though.

Now you can start the GUI application by executing x11docker as a wrapper script: ~/x11docker/x11docker <docker_image>, e.g. ~/x11docker/x11docker gimp to start GIMP. If you want to access your home directory to edit and save some pictures you can mount the home directory with ~/x11docker/x11docker <docker_image> --share /home/<your_username>/. In the container the mounted path will be the same.

2.2 X forward via SSH

In order to see the GUI application you can use X forwarding via SSH. The Dockerfile needs to be customized to allow SSH connections:

#Dockerfile without ssh:
FROM ubuntu
RUN set -x && \
apt-get update && \
apt-get install -y --no-install-recommends gimp && \
apt-get clean && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*
ENTRYPOINT ["gimp"]
CMD ["--new-instance"]

#Updated Dockerfile:
FROM ubuntu
RUN set -x &&\
apt-get update && \
apt-get install -y --no-install-recommends \
gimp \
openssh-server \
xauth && \
apt-get clean && rm -rf /var/lib/apt/lists/* /usr/share/man/* /tmp/* /var/tmp/*
RUN mkdir /var/run/sshd \
&& mkdir /root/.ssh \
&& chmod 700 /root/.ssh \
&& ssh-keygen -A \
&& sed -i "s/^.*PasswordAuthentication.*$/PasswordAuthentication no/" /etc/ssh/sshd_config \
&& sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/" /etc/ssh/sshd_config \
&& sed -i "s/^.*X11UseLocalhost.*$/X11UseLocalhost no/" /etc/ssh/sshd_config \
&& grep "^X11UseLocalhost" /etc/ssh/sshd_config || echo "X11UseLocalhost no" >> /etc/ssh/sshd_config
RUN echo "INSERT YOUR PUBLIC SSH KEY HERE" >> /root/.ssh/authorized_keys
ENTRYPOINT ["sh", "-c", "/usr/sbin/sshd && tail -f /dev/null"]

When you update the Dockerfile, make sure to insert your public SSH key between the quotation marks in the second to last line. After rebuilding the image with the revised Dockerfile, you can start the GUI application by following these steps:

  1. Start the Docker container by executing docker run --name gimp -d gimp
  2. Get the IP address of the container: docker inspect gimp | grep IPAddress
  3. Connect to the container: ssh -X root@<the IP address of the container (in the output of the last command)>
  4. Start the program. In this example you start GIMP by entering gimp.

If you want to stop the container, you can simply execute docker stop gimp. You can start it again with docker start gimp.

Note: You may need to delete the known_hosts entry for the container when deleting or creating new containers. The command to remove the host entry should be displayed in the warning message.
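Removing the stale entry can usually be done with ssh-keygen -R <host>. As an alternative, here is a small sketch; forget_host is a made-up helper that drops all lines whose first field matches the given host or IP:

```shell
#!/bin/sh
# Hypothetical helper: remove all known_hosts lines for a host or IP.
# Note: this does not handle hashed known_hosts entries; for those,
# use "ssh-keygen -R <host>" instead.
forget_host() {
    host=$1
    file=${2:-$HOME/.ssh/known_hosts}
    awk -v h="$host" '$1 != h' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Usage: forget_host 172.17.0.2
```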

2.3 X forward by mounting the X11 socket

Warning: Only use this method with containers you trust! The X11 socket can be abused to create keyloggers or to inject keyboard/mouse input into the host.

In this example you must set the DISPLAY environment variable accordingly and share the X11 socket with the container in order to view the GUI application. The following command will work:

~/docker-image> docker run -e "DISPLAY=$DISPLAY" -v /tmp/.X11-unix/:/tmp/.X11-unix/ gimp
Gtk-Message: 13:05:52.508:...

2.4 General tips

You can retrieve a list of all running containers by issuing the docker container ls command:

~> docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
165e7889fc19        gimp                "gimp"              9 seconds ago       Up 9 seconds                            gallant_wright


If you want to access some of your pictures in your home directory or save the project, you can add another bind mount:

~/docker-image> docker run -e "DISPLAY=$DISPLAY" -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /home/<your_username>/:/some/mount/point/in/container gimp

In GIMP you should be able to access the specified path (/some/mount/point/in/container) and it should contain the files stored under your home directory. It should not be possible to write or read files which you cannot access anyway.
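The path translation implied by this bind mount can be sketched as a helper; to_container_path is made up for this example:

```shell
#!/bin/sh
# Hypothetical helper: translate a path under the host home directory to
# the path it appears at inside the container, given a bind mount
# -v <host_home>:<mount_point> as in the command above.
to_container_path() {
    host_path=$1
    host_home=$2
    mount_point=$3
    echo "${mount_point}${host_path#"$host_home"}"
}

# Example:
# to_container_path /home/alice/Pictures/a.png /home/alice /some/mount/point/in/container
# -> /some/mount/point/in/container/Pictures/a.png
```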

After some time you can accumulate a lot of unused containers or images. These can be removed with the docker image prune and docker container prune commands:

~> docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
...
deleted: sha256:542a7d6155ef1fdd077614e0de5eb4f2c293b463f16a9d2096458b9ddf76eea1

Total reclaimed space: 5.356GB

~> docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
...
7a8b13538a5ab72ab1afb9ca99b2ed3d817224414ebd93d93c8a22c29d4c34e8

Total reclaimed space: 438.3kB

Running computing-intensive containers

When you want to run something like machine learning code or TensorFlow applications, it is highly recommended that you use resources other than your green desktop, such as the National Analysis Facility (NAF):
GPU on NAF, Singularity

Running multi-container setups

Larger applications often consist of several docker containers that perform different tasks. For example, a container can provide a database that is used by another container to store and read the data.

To simplify the interaction of several docker containers, it is recommended to use docker-compose. With this tool, the services can be represented in a YAML file and then started with a single command. For a beginners tutorial you can check out the official docker documentation: Get started with Docker Compose
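As a rough sketch, a docker-compose.yml for such a two-container setup (a web service plus Redis, as in the tutorial) might look like the following. The service names, build context and ports are assumptions for this example; note the host port above 1024 to satisfy the authorization plugin:

```yaml
version: "3"
services:
  web:
    build: .           # Dockerfile of the web application (assumed to be in this directory)
    ports:
      - "8080:5000"    # host port above 1024, container port of the web app
  redis:
    image: "redis:alpine"
```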

In the example Redis is used to store the page hits. But how can the web container request the information? In the Python script, "redis" is used as the hostname when defining the connection. This is a network alias which can be resolved inside the docker network bridge. You can find out the aliases of the containers using the following command:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.Aliases}}{{end}}' <container id>

Communication between containers without --network=host

If you want to replicate a multi-container setup, you will often find that all ports used by the application are published to the host via the -p, --publish option, and that the main container is started with --network=host to gain access to all published ports, as these are easily reachable via localhost:<port>. However, since the --network=host option is blocked by the authorization plugin, this does not work on green desktops.

To solve this, you need to use network aliases or IP addresses in order to replace the localhost:<port> references. A simple example is an nginx container which is started via

docker run -d -p 8080:80 nginx

If you now issue curl localhost:8080 you should see the HTML of the default nginx page. But if you do this inside a container, localhost points to the container itself and not to the host. Due to this it will not work:

docker run -it alpine
apk add curl
...
curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

But if you run curl with the IP address and the port of the nginx container, it will work:

curl 172.17.0.2:80
<!DOCTYPE html>
<html>
....

So, every reference to localhost must be replaced by the IP address (or network alias) of the correct container, and the port should correspond to the one used inside the referenced container, not the published one. However, the IP address of a container can change, for example when another container has already claimed it. This can be prevented by assigning static IP addresses or network aliases. For this it is necessary to create a user-defined network:

docker network create --subnet <ip> --gateway <ip> <name>

docker network create --subnet 172.22.0.0/16 --gateway 172.22.0.1 custom

Now you are able to set network aliases and IP addresses when using docker run. You just need to select the custom network, too:

docker run -d --network custom --network-alias nginx --ip 172.22.0.2 nginx

After starting the alpine container via docker run -it --network custom alpine, these curl commands should be successful:

curl nginx
curl 172.22.0.2
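The choice of a static address can be sanity-checked with a small helper; in_custom_subnet is made up for this example, and a simple prefix match is enough for the /16 subnet assumed above:

```shell
#!/bin/sh
# Hypothetical helper: check that an address lies in the custom
# 172.22.0.0/16 subnet created above (simple prefix match, not full
# CIDR arithmetic).
in_custom_subnet() {
    case "$1" in
        172.22.*) return 0 ;;
        *)        return 1 ;;
    esac
}

in_custom_subnet 172.22.0.2 && echo "172.22.0.2 fits the custom network"
```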


More information: Dockerfile reference, Dockerfile best practices
