Software & Applications

IT is currently setting up a new platform for high-performance and high-throughput computing. The system will consist of dedicated and general-purpose resources. Due to larger changes required in the computing center, the common resources will become available in early 2016.

For more detailed information on computing resources please consult the official IT pages.


The Maxwell-Cluster is a major investment of DESY-IT planned for early 2016. The cluster will be composed of a core partition, partitions with specific capabilities such as GPUs, and group-specific partitions. Partitions can and will overlap. The entire cluster will be managed with SLURM. On the core partition the SLURM scheduler will essentially work on a first-come, first-served basis. The group-specific partitions, however, have slightly different rules: although everyone can run jobs on a group-specific partition, members of the owning group have a higher priority, and their jobs can push non-group jobs off the partition.
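As a rough sketch, submitting a job to a particular SLURM partition looks like the script below. The partition name, job name, and resource values are placeholders for illustration, not confirmed Maxwell partition names; check the Maxwell documentation for the actual values.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch.
# "maxwell" is a hypothetical partition name, not a confirmed one.
#SBATCH --partition=maxwell     # choose core or a group-specific partition
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --time=01:00:00         # walltime limit (hh:mm:ss)

# srun launches the job step on the allocated node
srun hostname
```

The script would be submitted with `sbatch job.sh`; on a group-specific partition, jobs from group members are scheduled with higher priority than non-group jobs.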

Like the IT-HPC cluster, Maxwell is intended for parallel computation, making best use of the multi-core architectures, the InfiniBand low-latency network, fast storage, and the available memory. The cluster is hence not suitable for single-core computations or embarrassingly parallel jobs such as Monte Carlo productions. Use BIRD, the Grid, or your group's WGS for these kinds of tasks.
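For the multi-node parallel workloads the cluster targets, a job script would request several nodes and launch one MPI rank per core, roughly as sketched below. The partition name, task counts, and program name are assumptions for illustration only.

```shell
#!/bin/bash
# Sketch of a multi-node MPI job -- the kind of tightly coupled,
# InfiniBand-using workload Maxwell is intended for.
# Partition name and resource numbers are hypothetical examples.
#SBATCH --partition=maxwell
#SBATCH --nodes=4               # spread the job across 4 nodes
#SBATCH --ntasks-per-node=16    # e.g. one MPI rank per core
#SBATCH --time=02:00:00

# srun distributes the MPI ranks across all allocated nodes
srun ./my_mpi_program
```

A single-core or embarrassingly parallel workload gains nothing from this setup, which is why such jobs belong on BIRD, the Grid, or a WGS instead.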

  • The Maxwell-Cluster is in many respects different from other resources. Please consult the "Using Maxwell" page for details. 
  • If you are familiar with the batch farm BIRD you might find the Slurm Rosetta table translating between different batch systems helpful. 
  • The Maxwell Hardware page provides a list of currently available nodes & configurations. 
