
A JupyterHub provides a convenient multi-user environment for running notebooks. If you want to know more about JupyterHub, we recommend the official documentation: https://jupyter.org/hub.

The JupyterHub on the Maxwell cluster can be reached at https://max-jhub.desy.de and is also available from outside the DESY network. You will be presented with a login form:

For login use your DESY credentials. This only works if you also have access to the Maxwell compute cluster. If the login doesn't work, you are probably not entitled to use the cluster. You can verify that by running the command "my-resources", for example on pal.desy.de. Note that using the Hub alone is not a sufficient reason to be granted access to the Maxwell cluster. In that case you can still run your own single-user notebook server and execute any notebook of your choice.
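
For example, from a machine inside the DESY network (pal.desy.de is the login host mentioned above; the exact output of my-resources may vary):

ssh pal.desy.de
my-resources    # lists the resource groups your account is entitled to use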

You could for example configure a notebook server using a scriptlet like this:


configure.local-jupyter.sh
#!/bin/bash
source /etc/profile.d/modules.sh
module load anaconda/3

if [[ -f "$HOME/.jupyter/jupyter_notebook_config.py" ]]; then
  echo "Config exists: $HOME/.jupyter/jupyter_notebook_config.py"
  echo "Remove it first to generate a new config"
else
  jupyter notebook --generate-config
  # passwd() prompts for a password and prints its hash
  mypass=$(python -c 'from notebook.auth import passwd; print(passwd())')
  myid=$(id -u)  # using your numeric user id as the port of the server makes it reasonably unique

  cat <<EOF >> "$HOME/.jupyter/jupyter_notebook_config.py"
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = '$mypass'
c.NotebookApp.port = $myid
c.NotebookApp.port_retries = 50
EOF
fi
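
A minimal sketch of how you might use the script (hostnames and the tunnel setup are illustrative; the port equals your numeric user id as configured above, and the last line assumes your local and Maxwell accounts share the same uid):

# on a Maxwell node, e.g. max-wgs.desy.de
bash configure.local-jupyter.sh   # generates the config, prompts for a password
jupyter notebook                  # starts the server on port $(id -u)

# from your desktop: tunnel the port and open http://localhost:8888
ssh -L 8888:localhost:$(id -u) max-wgs.desy.de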

Let's assume that you successfully logged in. You will see a spawner form which allows you to choose between a couple of options. Essentially, the spawner launches a slurm job on the cluster, with the options shown translated into slurm job options.
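Conceptually, the spawner submits something similar to the following batch job on your behalf. This is only a rough sketch: the actual script and options are generated by the Hub, and all values shown here are illustrative.

#!/bin/bash
#SBATCH --partition=jhub           # partition selected in the spawner form
#SBATCH --time=7-00:00:00          # session limit of the jhub partition
#SBATCH --constraint=V100          # only if a specific GPU type was requested
jupyterhub-singleuser              # starts your personal notebook server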

The current status table shows the currently available resources for all partitions you are entitled to use. The default partition is the shared jhub partition. It runs up to 40 concurrent sessions per compute node, so it can currently host up to 120 notebook servers at the same time. The maximum duration of a session on the jhub partition is always 7 days. On all other partitions the time is limited to 8 hours at most, simply to prevent idle or forgotten notebooks from consuming too many resources.

You can for example choose between different types of GPU, either P100 or V100 or any. "Any" can be one of P100, V100, K20X, K40X or whatever else is available on the cluster. If you have chosen a partition without any GPUs (for example the jhub, maxwell and all partitions), the request for a GPU will simply be ignored. If you have selected a V100 GPU but none is currently available, your server request will remain pending for up to 15 minutes. So it's always advisable to check the availability of resources in the "Current Status" table. Alternatively you can log in to one of the Maxwell login nodes and check the status using the slurm commands sinfo, squeue or sview.
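
For example, from a login node (standard slurm commands; the partition names match those shown in the spawner form):

sinfo -p jhub                        # node states of the jhub partition
squeue -p jhub                       # jobs currently running on it
sinfo -N -p maxwell -o "%N %G %t"    # per-node GPU (gres) info, here for the maxwell partition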

Once you have made your choice, launch the server using the "Spawn" button at the bottom of the page. You should see the running notebook server within a few seconds. You can verify the status of your job, for example on max-wgs.desy.de, using 'squeue -u $USER -a'.

Once started you can choose between a number of site-wide installed kernels. Generally the Python3 kernel (the default) will be your primary choice. The list of kernels can be extended with user-installed kernels.
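A common way to add your own kernel, for example from a conda environment, is via ipykernel (a standard recipe; the environment name "myenv" is just an example):

conda activate myenv
pip install ipykernel     # if not already present in the environment
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

The kernel spec is installed under $HOME/.local/share/jupyter/kernels and will then show up in the Hub's kernel list.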

The development version of the Maxwell JupyterHub offers additional options for working with remote notebooks or repositories.
