The main components that distinguish the Maxwell cluster from other computational resources are:
- fast dedicated storage (cluster file systems)
- a fast, low-latency network (InfiniBand)
- high-memory compute nodes with a substantial number of GPGPUs
These pages give a very brief overview of the corresponding components and their status (where possible).
Summary of the Maxwell infrastructure
Compute Hardware | | InfiniBand Hardware | | Storage | |
---|---|---|---|---|---
CPU+GPU nodes | 798 | root switches | 6 | GPFS exfel | ~40 PB
Total number of cores (with hyperthreading) | 61696 | top switches | 12 | GPFS petra3 | ~20 PB
Total number of physical cores | 30898 | leaf switches | 42 | BeeGFS desy | 1.5 PB
Theoretical CPU peak performance | 1074 TFlops | IB cables (count) | >1432 | BeeGFS cssb | 3.2 PB
Total RAM | 420 TB | IB cables (length) | >7.6 km | |
GPU nodes | 180 | | | |
Total number of GPUs | 379 | | | |
Theoretical GPU peak performance | 2330 TFlops | | | |
Total peak performance | 3404 TFlops | | | |
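As a consistency check, the total peak performance is simply the sum of the CPU and GPU figures. Theoretical peak is conventionally derived from core count, clock rate, and FLOPs per cycle; the per-node clock and FLOPs-per-cycle values are not listed on this page, so the formula below is a generic sketch rather than the exact derivation used for Maxwell:

$$
P_\text{peak} = N_\text{cores} \cdot f_\text{clock} \cdot \frac{\text{FLOPs}}{\text{cycle}},
\qquad
P_\text{total} = 1074\,\text{TFlops} + 2330\,\text{TFlops} = 3404\,\text{TFlops}.
$$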
Compute Infrastructure
These pages contain an up-to-date list of the compute and login node hardware in Maxwell, both for the entire cluster and for each of the partitions. For rules applying to individual partitions, please visit the Documentation pages.
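Since the partition layout changes as hardware is added, it can also be queried directly on the cluster. Below is a minimal sketch assuming the SLURM batch scheduler is available on the login node; the `sinfo` format specifiers used here (`%P` partition, `%D` node count, `%c` CPUs per node, `%m` memory per node) are standard SLURM options.

```python
import subprocess

# List each partition with its node count, CPUs per node, and memory per node.
# Run on a Maxwell login node; assumes the SLURM client tools are in PATH.
result = subprocess.run(
    ["sinfo", "--format", "%P %D %c %m"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```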
Storage Infrastructure
These pages list the storage resources available in Maxwell, along with links to documentation and specifications. For information on what to use for which purpose, and on other constraints, please visit the Documentation pages.
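To check how much space a given file system currently offers from a login or compute node, standard tools suffice. A minimal sketch; the mount points below are illustrative guesses, not guaranteed Maxwell paths, so substitute the actual paths from the Documentation pages.

```python
import shutil

# Print total and free space per file system. The mount points are
# hypothetical examples; replace them with the real Maxwell paths.
for mount in ("/gpfs/exfel", "/gpfs/petra3", "/beegfs/desy", "/beegfs/cssb"):
    try:
        usage = shutil.disk_usage(mount)
    except OSError:
        print(f"{mount}: not mounted on this node")
    else:
        print(f"{mount}: {usage.free / 1e15:.2f} PB free of {usage.total / 1e15:.2f} PB")
```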
InfiniBand Infrastructure
These pages give a brief overview of the switches in the InfiniBand fabric.
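The switch fabric itself is only visible to administrators, but you can at least verify the link rate of the node your job lands on. A minimal sketch, assuming a Linux node with the standard sysfs layout for InfiniBand adapters:

```python
from pathlib import Path

# Report the InfiniBand link rate of each HCA port via sysfs.
# Assumes a Linux node with an IB adapter and the standard sysfs layout.
base = Path("/sys/class/infiniband")
if not base.exists():
    print("no InfiniBand devices found on this node")
else:
    for rate_file in base.glob("*/ports/*/rate"):
        device = rate_file.parents[2].name   # HCA name, e.g. mlx5_0
        port = rate_file.parent.name         # port number
        print(f"{device} port {port}: {rate_file.read_text().strip()}")
```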
Adding resources to Maxwell
It is quite possible to add resources to Maxwell, for example resources owned by individual groups. Please get in touch with maxwell.service@desy.de for details. Keep in mind that we will need to impose certain constraints to keep the cluster as homogeneous as feasible.