The Maxwell-Cluster is a resource dedicated to parallel and multi-threaded applications which can exploit at least some of its specific characteristics. In addition to serving as a medium-scale High-Performance-Cluster, Maxwell incorporates resources for Photon Science data analysis, resources of CFEL, CSSB, PETRA IV and the European XFEL.
The Maxwell-Cluster is composed of a core partition (maxwell), partitions with specific capabilities like GPUs, and group-specific partitions. The core partition is continuously growing and, with currently 55 nodes, 2792 cores and 14 TB of memory, is the largest resource. Although some partitions are group-specific, all compute nodes are available for everyone, just like on the (meanwhile decommissioned) IT-HPC cluster!
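Which partitions you can see, and how big they are, can be checked with standard SLURM commands. For instance, the following sketch (generic SLURM, not Maxwell-specific) lists each partition with its node count, CPUs and memory per node, and generic resources such as GPUs:

```bash
# List partitions with node count, CPUs/node, memory/node (MB) and GPUs (GRES).
# Plain SLURM; works on any SLURM cluster, including Maxwell.
sinfo -o "%P %D %c %m %G"
```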
The Maxwell-Cluster is primarily intended for parallel computation making best use of the multi-core architectures, the Infiniband low-latency network, fast storage and the available memory. The cluster is hence not suited for single-core computations or embarrassingly parallel jobs like Monte-Carlo productions; use BIRD, Grid or your group's workgroup server (WGS) for these kinds of tasks.
The (almost) entire cluster is managed by the SLURM scheduler (with some notable exceptions). The SLURM scheduler essentially works on a first-come, first-served basis, but uses a back-fill algorithm: smaller jobs that fill the gaps without delaying other jobs will be executed ahead of scheduled big jobs. The group-specific partitions, however, have slightly different rules: though everyone can run jobs on group-specific nodes, members of the group have a higher priority and their jobs will push non-group jobs off the partition. See Groups and Partitions on Maxwell for details.
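As an illustration of how a job reaches the scheduler, a minimal batch script might look like the sketch below; the resource requests are placeholders and `my_mpi_app` is a hypothetical stand-in for your own parallel application:

```bash
#!/bin/bash
#SBATCH --partition=maxwell     # core partition; group partitions are requested the same way
#SBATCH --nodes=2               # multi-node jobs profit from the Infiniband low-latency network
#SBATCH --time=01:00:00         # wall-clock limit; modest limits help the back-fill algorithm
#SBATCH --job-name=mpi-example
#SBATCH --output=%x-%j.out      # log file named after job name and job id

# my_mpi_app is a hypothetical placeholder for your own MPI application
srun ./my_mpi_app
```

Submit it with `sbatch jobscript.sh`; `squeue -u $USER` then shows where it sits in the queue.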
- The Maxwell-Cluster is in many respects different from other resources. Please consult the Using Maxwell page(s) for details.
- If you are familiar with the batch farm BIRD, you might find the SLURM Rosetta table, which translates between different batch systems, helpful.
- To get started, please have a look at the Getting Started page!
- The Maxwell Hardware page provides a list of currently available nodes & configurations.
- The Maxwell Partitions page provides a quick overview of the nodes, capacities, features and limits of the individual partitions.
- The Maxwell Groups and Partitions page describes the rules and setup for the various partitions in the cluster.
Read the documentation! It should cover at least the essentials. If you come across incorrect or outdated information, please let us know!
If you find the resource useful for your work, we would greatly appreciate learning about publications that have substantially benefited from the Maxwell-Cluster. Drop us a mail at firstname.lastname@example.org. Acknowledgement of the Maxwell resource would also be greatly appreciated; it helps to foster the cluster. For example: "This research was supported in part through the Maxwell computational resources operated at Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany"
For any questions, problems or suggestions, please contact our ticket system: email@example.com
All announcements will be sent via firstname.lastname@example.org. We strongly recommend that all Maxwell users subscribe to the mailing list; only users of the maxwell partition are automatically added.
We offer a forum where Maxwell users can help other Maxwell users. It is by no means a replacement for the regular support channels and the helpdesk.