The Maxwell cluster allows groups to bring their own resources into the cluster. The Maxwell cluster is organized in (SLURM) partitions, which can be tuned to meet individual requirements. In general, group partitions can be configured to be available to all Maxwell users while prioritizing group members: "hostile" jobs from non-members are removed within an adjustable period of time. Advance reservations are currently only possible for SLURM admins, but we are working on it. Let us know if you need to reserve nodes in advance!
The current partition scheme
- The Maxwell partition. This is the default partition; if you don't specify a partition, your jobs are scheduled on the maxwell partition.
- The EXFEL partitions for European XFEL groups and external users
- The Photon Science partition for members of FS. Use -p ps to select the photon science partition.
- The PSX partition for external Photon Science users. Use -p psx to select this partition.
- The CFEL partition consisting of the resources of the CFEL-DESY groups. Use -p cfel to select this partition.
- The CSSB partition consisting of CSSB resources. Use -p cssb to select this partition.
- The PETRA4 partition consisting of resources for Petra4 design.
- The XFEL partitions for operation and simulation of the XFEL accelerator at DESY.
- The CMS partitions.
- The ALL partition consisting of almost all nodes from the other partitions. Use -p all to select this partition (see the job script sketch below for how partition selection works). Please note that preemption rules apply!
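Partition selection happens via the -p/--partition option of sbatch or salloc. A minimal batch script sketch (the partition, wall time and job name below are arbitrary placeholders, not recommendations):

```bash
#!/bin/bash
#SBATCH --partition=ps                 # placeholder: pick maxwell, ps, psx, cfel, cssb, all, ...
#SBATCH --time=01:00:00                # placeholder wall time
#SBATCH --job-name=partition-demo
#SBATCH --output=partition-demo-%j.out

# Placeholder payload; replace with your actual application.
echo "Running on $(hostname) in partition $SLURM_JOB_PARTITION"
```

The partition can also be overridden at submission time, e.g. sbatch -p all partition-demo.sh, keeping the preemption rules of the all partition in mind.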
There are more partitions in Maxwell. Have a look at the partition overview, also for the job limits that apply!
Check out the hardware pages for details on all available nodes as well as on individual partitions.
Login Nodes / Job-Submission Hosts
For the various partitions and hardware platforms, different login nodes are available, mostly for compilation and for testing applications that require specific hardware. Job submission can always be done from any of the login nodes, regardless of the partition or hardware requested. A quick overview of the available login nodes:
Login name | Partition | Access | Scope | URL | Comments |
---|---|---|---|---|---|
max-display | NONE | any maxwell user | generic graphical login nodes | https://max-display.desy.de:3443/ | Accessible from outside |
max-wgs | NONE | any maxwell user | generic login nodes | | remote via bastion |
max-wgsa | NONE | any maxwell user | generic login nodes | | will be decommissioned |
European XFEL | | | | | |
max-exfl | EXFEL, UPEX | EXFEL members & users | interactive CPU node for EXFEL | | remote via bastion |
max-exfl-display | EXFEL, UPEX | EXFEL members & users | graphical login node | | coming soon |
Photon Science | | | | | |
max-fsc | PS | FS members | interactive CPU node | https://max-fsc.desy.de:3443/ | remote via bastion |
max-fsg | PS | FS members | interactive GPU node | https://max-fsg.desy.de:3443/ | remote via bastion |
desy-ps-cpu | PSX | external Photon Science users | interactive CPU node | https://desy-ps-cpu.desy.de:3443/ | remote via desy-ps-ext |
desy-ps-gpu | PSX | external Photon Science users | interactive GPU node | https://desy-ps-gpu.desy.de:3443/ | remote via desy-ps-ext |
max-nova | NONE | NOVA users only | graphical login nodes for the NOVA project | https://max-nova.desy.de:3443/ | Accessible from outside |
CFEL | | | | | |
max-cfel | CFEL | CFEL DESY users | interactive CPU nodes for CFEL | | remote via bastion |
max-cfelg | CFEL | CFEL DESY users | interactive GPU nodes for CFEL | | remote via bastion |
CSSB | | | | | |
max-cssb-display | CSSB | all CSSB maxwell users | graphical login nodes for CSSB | https://max-cssb-display.desy.de:3443/ | tbd |
max-cssba | CSSB, UKE | members of the UKE group in CSSB | interactive CPU node for UKE@CSSB | | remote via bastion |
A list of all nodes with some hardware specs can be found on the hardware page.
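As an illustration of a typical login and submission flow (a sketch only: the fully qualified host names and the jump host bastion.desy.de are assumptions derived from the table above, and my_job.sh is a hypothetical script):

```bash
# From outside DESY, hop via the bastion host; from inside, ssh directly.
ssh -J user@bastion.desy.de user@max-wgs.desy.de

# Job submission works from any login node, regardless of the target partition.
sbatch -p maxwell my_job.sh
squeue -u "$USER"        # check the state of your own jobs
```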
Resources and Partitions
If you are uncertain which partitions you are allowed to use, you can check with simple scriptlets:
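For example, a scriptlet along these lines (a sketch using standard SLURM commands, not an official Maxwell tool) compares your unix groups with each partition's AllowGroups setting:

```bash
#!/bin/bash
# Sketch: list partitions whose AllowGroups contains one of your unix groups.
# Note: this ignores other access controls such as AllowAccounts or QOS.
my_groups=$(id -Gn | tr ' ' '\n')
scontrol --oneliner show partition | while read -r line; do
    name=${line#PartitionName=}; name=${name%% *}
    allowed=$(grep -o 'AllowGroups=[^ ]*' <<< "$line" | cut -d= -f2)
    if [ "$allowed" = "ALL" ]; then
        echo "$name: open to all users"
        continue
    fi
    for group in ${allowed//,/ }; do
        if grep -qx "$group" <<< "$my_groups"; then
            echo "$name: accessible via group $group"
            break
        fi
    done
done
```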
Viewing the partition scheme
Simply ssh to any of the login nodes and run sview to get an overview of available partitions, nodes and jobs. By default, sview only displays the partitions you are entitled to use (select "Hidden" under Options if you want to see all of them). The sinfo command gives a more detailed and more customizable view of the partitions and limits. See the sinfo man page for details.
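For example (standard sinfo options; the chosen format string is just one possible selection of columns):

```bash
sinfo -s                                   # one summary line per visible partition
sinfo -p maxwell,all -o "%P %a %l %D %N"   # partition, availability, time limit, node count, node list
```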