The Maxwell Cluster offers interactive login nodes for undemanding tasks and for graphical work. All interactive login nodes are shared among many users and are therefore NOT intended for compute-intensive work!
There are two classes of login nodes:
- graphical nodes, so-called display nodes. They are fully accessible from outside the DESY network and are your best choice for most interactive tasks.
- non-graphical nodes, so-called workgroup servers (WGS). They can only be reached from within the DESY network, but have more relaxed restrictions on computations.
Graphical (remote) Login
The display nodes are equipped with several GPUs, allowing for hardware-accelerated graphical work (e.g. ansys, comsol, ...). You can log in
- via ssh max-display3.desy.de with your favorite ssh client. The port is the default port 22, so no additional configuration or tunneling is required.
- via https://max-display3.desy.de:3389/ in your browser. This is very convenient, but has some limitations when it comes to keyboard shortcuts.
- by connecting to max-display3.desy.de with the FastX3 client. See the FastX documentation.
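For the ssh route, a host entry in your client configuration saves typing. A minimal sketch, assuming a standard OpenSSH client; the username `jdoe` and the alias `max-display` are placeholders:

```
# ~/.ssh/config — hypothetical entry; replace "jdoe" with your DESY account
Host max-display
    HostName max-display3.desy.de
    User jdoe
    # Forward X11 only if you run individual X applications over ssh;
    # FastX sessions do not need this.
    ForwardX11 yes
```

Afterwards a plain `ssh max-display` connects on the default port 22.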
All nodes are shared among many users, so please be considerate. If you run multi-core or memory-hungry jobs on display nodes, you will receive warnings until the jobs are terminated (forcefully if necessary).
Groups and institutions like Eu.XFEL, CSSB, Photon Science or the NOVA project have dedicated display nodes (see below for a list). Use the dedicated display nodes if possible: the rules are much more relaxed and the load is usually much lower.
We recommend using https://max-display3.desy.de:3389/ (or the corresponding display node of your group) in the FastX client.
Note: eduroam occasionally blocks access to port 3389. In that case you need to use an ssh-type connection.
More information: have a look at the FastX documentation
Non-Graphical Login
For more demanding short computations, like tests or compilation, a few workgroup servers (WGS) are available. From outside DESY you'll need an ssh tunnel through bastion.desy.de or max-display.desy.de. From inside DESY, simply
- ssh max-wgs.desy.de with your favorite ssh client.
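The tunnel for access from outside DESY can be set up once in your ssh client configuration instead of being typed on every login. A sketch assuming OpenSSH's ProxyJump option; `jdoe` and the alias `max-wgs` are placeholders:

```
# ~/.ssh/config — hypothetical entry using bastion.desy.de as jump host
Host max-wgs
    HostName max-wgs.desy.de
    User jdoe
    ProxyJump jdoe@bastion.desy.de
```

With this entry, `ssh max-wgs` tunnels transparently through bastion.desy.de (the jump host adds one extra authentication step).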
Groups and institutions like Eu.XFEL, CSSB, CFEL and Photon Science have dedicated WGS (see below for a list). Use the dedicated WGS if possible: the rules are much more relaxed and the load is usually much lower.
Overview of interactive login nodes
| node | remote¹ | fastx² | GPU acceleration³ | session discovery⁴ | who can use it | remarks |
|---|---|---|---|---|---|---|
| **graphical login nodes (display nodes)** | | | | | | |
| max-display.desy.de | yes | yes | yes | yes | everyone with maxwell access | |
| max-fs-display.desy.de | yes | yes | yes | yes | all photon science users | |
| max-exfl-display.desy.de | yes | yes | yes | yes | exfel-wgs-users, exfel-theory, upex | users with active beamtime might have additional options |
| max-cssb-display.desy.de | yes | yes | yes | yes | all cssb users | |
| max-nova.desy.de | yes | yes | yes | yes | max-nova-users | |
| **non-graphical login nodes (wgs)** | | | | | | |
| max-wgs | no | no | no | no | everyone with maxwell access | |
| max-exfel | no | no | no | no | exfel-wgs-users, exfel-theory, upex | |
| max-cfel | no | no | no | no | cfel-wgs-users | use max-display if possible, ssh to max-cfel from there |
- ¹ remote: directly accessible from outside DESY?
- ² fastx: fastx enabled?
- ³ GPU acceleration: GPUs used for FastX sessions?
- ⁴ session discovery: will FastX automatically reconnect to running sessions?
Policies
max-display.desy.de is primarily intended for graphical logins and for applications requiring GPU hardware acceleration. Like regular workgroup servers, the nodes can be used
- as a job-submission host
- for short computational jobs
- for code compilation
- for short-running applications with a low CPU and memory footprint
- for graphical applications requiring GPU acceleration
The nodes are NOT intended for compute-intensive jobs, and in particular not for computational jobs using the GPUs. The primary task of the GPUs is rendering.
The display nodes can be accessed directly from any location, which also means that they can be attacked from any location. We will therefore
- deploy any relevant update immediately.
- reboot nodes on a regular basis.
- reboot nodes as soon as fixes to severe exploits are released.
We aim for a 24h notification before rebooting nodes, but depending on the severity of an exploit, nodes might also need to be rebooted on very short notice. So please
- frequently save your work
- terminate login sessions you no longer need
- don't use "screen" and the like to monitor job progress: submit a batch job instead.
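As a concrete alternative to watching a job from a login session, a small batch script can be submitted and then forgotten. A sketch assuming Maxwell's SLURM batch system; the partition name, time limit and program are placeholders to adapt:

```
#!/bin/bash
#SBATCH --job-name=myjob        # placeholder job name
#SBATCH --partition=maxcpu      # assumption: adjust to a partition available to you
#SBATCH --time=01:00:00         # requested wall time
#SBATCH --output=myjob-%j.out   # %j expands to the job ID

# Replace with your actual program
./my_program
```

Submit it with `sbatch myjob.sh` from a login node and check on it later with `squeue -u $USER`; no interactive session needs to stay open.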
We reserve the right to kill compute-intensive or long-running jobs.