The Maxwell-Cluster is a resource dedicated to parallel and multi-threaded applications that can exploit at least some of its specific characteristics. In addition to serving as a medium-scale High-Performance-Cluster, Maxwell incorporates resources for Photon Science data analysis as well as resources of CFEL, CSSB, Petra4 and the European XFEL.
If you find the resource useful for your work, we would greatly appreciate learning about publications that have substantially benefited from the Maxwell-Cluster. Drop us a mail at email@example.com - or, if you'd like a larger audience, feel free to send it to firstname.lastname@example.org. Acknowledging the Maxwell resource would also be greatly appreciated; it helps to foster the cluster.
The Maxwell-Cluster is composed of a core partition (maxwell), partitions with specific capabilities such as GPUs, and group-specific partitions. The core partition is growing continuously and, with currently 55 nodes, 2792 cores and 14TB of memory, is the largest resource.
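To see which of these partitions are visible to your account, the standard SLURM query tools can be used; a minimal sketch (the output columns and any partition names beyond maxwell depend on your group memberships):

```shell
# List all partitions you can see, with availability, time limit,
# node count and CPU usage (allocated/idle/other/total).
sinfo -o "%15P %5a %10l %6D %20C"

# Show the nodes and limits of the core partition in detail.
scontrol show partition maxwell
```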
Like the IT-HPC cluster, Maxwell is intended for parallel computations that make the best use of the multi-core architectures, the low-latency InfiniBand network, the fast storage and the available memory. The cluster is hence not suited for single-core computations or embarrassingly parallel jobs such as Monte-Carlo productions. Use BIRD, Grid or your group's WGS for these kinds of tasks.
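A typical parallel job along these lines could be submitted with a batch script; the following is only a sketch, where the module name and the executable `./my_app` are placeholders for your own environment:

```shell
#!/bin/bash
# Sketch of a multi-node MPI batch script for the core partition.
#SBATCH --partition=maxwell     # core partition
#SBATCH --nodes=4               # spread the job over several nodes
#SBATCH --time=02:00:00         # wall-clock limit
#SBATCH --job-name=mpi-example

# Load an MPI environment (the exact module name depends on the installation).
module load mpi/openmpi-x86_64

# Launch one MPI rank per allocated core; ./my_app is a placeholder.
mpirun ./my_app
```

Save it as, e.g., `job.sh` and submit it with `sbatch job.sh`.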
The entire cluster will be managed with SLURM. The SLURM scheduler on the core partition will essentially work on a first-come, first-served basis, combined with a back-fill algorithm: smaller jobs that fill the gaps without delaying other jobs are executed ahead of scheduled big jobs. The group-specific partitions, however, have slightly different rules: although everyone can run jobs on group-specific nodes, members of the group have a higher priority and will preempt non-group jobs off the partition. See Groups and Partitions on Maxwell for details.
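The effect of the scheduling rules above can be inspected with SLURM's query commands; a small sketch, where `<jobid>` is a placeholder for one of your own job IDs:

```shell
# Show your pending and running jobs, including the scheduling reason
# (e.g. Priority, Resources) reported by the back-fill scheduler.
squeue -u "$USER" -o "%10i %12P %15j %8T %10M %R"

# Ask for a pending job's estimated start time.
squeue -j <jobid> --start

# Inspect a job's priority and full scheduling details.
scontrol show job <jobid>
```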