Maxwell: Infrastructure

The main components of the Maxwell cluster that distinguish it from other computational resources are

  • fast, dedicated storage (cluster file systems)
  • a fast, low-latency network (InfiniBand)
  • high-memory compute nodes with a substantial number of GPGPUs

These pages give a very brief overview of the corresponding components and their status (where possible).

Summary of the Maxwell infrastructure

Compute Hardware

  • CPU+GPU nodes: 798
  • Total number of cores (with hyperthreading): 61696
  • Total number of physical cores: 30898
  • Theoretical CPU peak performance: 1074 TFlops
  • Total RAM: 420 TB
  • GPU nodes: 180
  • Total number of GPUs: 379
  • Theoretical GPU peak performance: 2330 TFlops
  • Total peak performance: 3404 TFlops¹

InfiniBand Hardware

  • Root switches: 6
  • Top switches: 12
  • Leaf switches: 42
  • IB cables (count): >1432
  • IB cables (total length): >7.6 km

Storage

  • GPFS exfel: ~40 PB
  • GPFS petra3: ~20 PB
  • BeeGFS desy: 1.5 PB
  • BeeGFS cssb: 3.2 PB

¹ Sum of the theoretical CPU and GPU peak performance (1074 + 2330 TFlops).




Compute Infrastructure

These pages contain an up-to-date list of the hardware of the compute and login nodes in Maxwell, for the entire cluster and for each of the partitions. For the rules applying to individual partitions, please visit the Documentation pages.
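
As an illustration, the partition and node hardware can also be queried directly from a login node. The snippet below is a minimal sketch, assuming Maxwell is scheduled with Slurm (the partition terminology above suggests this); the sinfo format fields used are standard Slurm options.

    import subprocess

    # Print one line per partition: name, node count, CPUs per node,
    # memory per node (MB), and generic resources such as GPUs.
    # %P, %D, %c, %m and %G are standard sinfo format fields.
    result = subprocess.run(
        ["sinfo", "--format=%P %D %c %m %G"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)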

Storage Infrastructure

These pages list the available storage resources in Maxwell, together with links to documentation and specifications. For information on what to use for which purpose, and on other constraints, please visit the Documentation pages.
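
As a sketch of how these file systems look from a node, the snippet below reports the size and usage of a few mounts. The mount points are hypothetical placeholders, not confirmed paths; the actual locations of the GPFS and BeeGFS file systems are given on the Documentation pages.

    import shutil

    # Hypothetical mount points -- substitute the real GPFS/BeeGFS paths
    # from the Documentation pages.
    mount_points = ["/gpfs/exfel", "/gpfs/petra3", "/beegfs/desy", "/beegfs/cssb"]

    for path in mount_points:
        try:
            usage = shutil.disk_usage(path)
        except (FileNotFoundError, PermissionError):
            print(f"{path}: not available on this node")
            continue
        print(f"{path}: {usage.used / 1e15:.2f} PB used "
              f"of {usage.total / 1e15:.2f} PB")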

InfiniBand Infrastructure

These pages give a brief overview of the switches in the InfiniBand fabric.
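
To see the fabric from a single node's perspective, the state and rate of the local host channel adapter can be checked with ibstat from the infiniband-diags package. A minimal sketch, assuming ibstat is installed on the node; the exact output layout may differ between versions.

    import subprocess

    # ibstat prints one block per host channel adapter (HCA) and port;
    # keep only the adapter names and the link state/rate lines.
    out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        stripped = line.strip()
        if stripped.startswith(("CA '", "State:", "Rate:")):
            print(stripped)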

Adding resources to Maxwell

It is entirely possible to add resources, for example group-owned nodes, to Maxwell. Please get in touch with maxwell.service@desy.de for details. Keep in mind that we will need to impose certain constraints to keep the cluster as homogeneous as feasible.