Software & Applications

The Maxwell-Cluster is a resource dedicated to parallel and multi-threaded applications which can make use of at least some of its specific characteristics. In addition to serving as a medium-scale high-performance cluster, Maxwell incorporates resources for Photon Science data analysis as well as resources of CFEL, CSSB, Petra4 and the European XFEL.

If you find the resource useful for your work, we would greatly appreciate learning about publications that have substantially benefited from the Maxwell-Cluster. Drop us a mail at maxwell.service@desy.de - or, if you'd like a larger audience, feel free to send it to maxwell-user@desy.de. Acknowledgement of the Maxwell resource would also be greatly appreciated; it helps to foster the cluster.


The Maxwell-Cluster is composed of a core partition (maxwell), partitions with specific capabilities like GPUs, and group-specific partitions. The core partition is continuously growing and, with currently 55 nodes, 2792 cores and 14 TB of memory, is the largest resource.
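Once logged in, the current partition layout can be inspected with standard SLURM commands. A minimal sketch, assuming SLURM is in your path and using the core partition name (maxwell) mentioned above; the output format string is just one possible choice:

    # Per-node view of the core partition: hostname, CPU count,
    # memory (MB) and generic resources such as GPUs
    sinfo -p maxwell -N -o "%n %c %m %G"

    # Condensed summary of all partitions visible to you
    sinfo -s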

Like the (meanwhile decommissioned) IT-HPC cluster, Maxwell is intended for parallel computation making best use of the multi-core architectures, the InfiniBand low-latency network, fast storage and available memory. The cluster is hence not suited for single-core computations or embarrassingly parallel jobs like Monte Carlo productions. Use BIRD, Grid or your group's WGS for such tasks.

The (almost) entire cluster is managed by the SLURM scheduler. SLURM essentially works on a first-come, first-served basis, but uses a backfill algorithm: smaller jobs that fill the gaps without delaying other jobs are executed ahead of scheduled big jobs. The group-specific partitions, however, have slightly different rules: though everyone can run jobs on group-specific nodes, members of the group have a higher priority and will push non-group jobs off the partition. See Groups and Partitions on Maxwell for details.
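As an illustration of the typical workflow, below is a minimal batch-script sketch for the core partition. The job name, resource requests and application binary (my_parallel_app) are hypothetical placeholders to be adapted to your own job:

    #!/bin/bash
    #SBATCH --partition=maxwell          # core partition
    #SBATCH --nodes=2                    # example request: two nodes
    #SBATCH --time=01:00:00              # wall-clock limit (HH:MM:SS)
    #SBATCH --job-name=my-parallel-job   # placeholder name

    # Launch the (hypothetical) parallel application on the allocation
    srun ./my_parallel_app

Submit the script with sbatch and monitor it with squeue -u $USER. With backfilling as described above, modest requests for nodes and wall time generally get scheduled sooner.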

  • The Maxwell-Cluster is in many respects different from other resources. Please consult the Using Maxwell page(s) for details.
  • If you are familiar with the batch farm BIRD, you might find the SLURM Rosetta table, which translates between different batch systems, helpful.
  • The Maxwell Hardware page provides a list of currently available nodes & configurations.
  • The Maxwell Partitions page provides a quick overview of the nodes, capacities, features and limits of the individual partitions.
  • The Maxwell Groups and Partitions page describes the rules and setup for the various partitions in the cluster.
  • Read the documentation! It should cover at least the essentials. If you come across incorrect or outdated information: please let us know!

[Figure: maxwell-layout]

Contact

For any questions, problems or suggestions, please contact: maxwell.service@desy.de

Announcements will be sent via maxwell-user@desy.de; Maxwell users are automatically subscribed. Non-Maxwell users (e.g. CFEL-wgs users) can self-subscribe at https://lists.desy.de/sympa/info/maxwell-user
