
General resources

The Maxwell cluster is composed of a Maxwell partition and resources which are contributed by various groups on the DESY campus.

The Maxwell partition is available to everyone on the DESY campus as well as to external users of the Photon Science facilities - under one condition: your application must be suitable for high-performance computing. That could include

  • multi-core MPI applications which can efficiently use most of the cores on a single or multiple nodes
  • many-core applications requiring a very low-latency high-bandwidth network
  • applications with very high memory consumption
  • photon science applications or simulations in need of GPU acceleration

Embarrassingly parallel applications such as Monte Carlo simulations usually make poor use of these resources; we advise using the BIRD cluster instead.

If you think the Maxwell cluster might help with your computational problems and your application is suited: just drop us a message asking for the resource, and please explain briefly what kind of applications you intend to run.

Please see Maxwell for Everyone for further information.

Group resources

If that's not the case, you might still be entitled to use dedicated parts of the Maxwell cluster contributed by your group.

Is your group not among those with dedicated resources? You could still contribute your own resources to the Maxwell cluster for the benefit of everyone on campus. Check the "Bringing resources to Maxwell" pages for options.

Very first step

You first want to verify which resources you can already use, or whom to contact in case you're missing a resource. There are various ways to do that, depending on your account type and the available software (e.g. FastX).

What always works: open a terminal (e.g. PuTTY), ssh to ( for external photon science users) and run a small scriptlet called my-resources:

[@bastion ~]$ my-resources

       Resource    Access   URL                             Comments                                                     
General Resources      
      desycloud      yes     
            afs      yes   /afs/         
         oracle      yes            

Compute Resources      
           bird       no             ask for the batch resource                        
        maxwell       no          ask for the maxwell resource          
  exfel@maxwell       no          please contact                             
   upex@maxwell       no          please contact                             
   cfel@maxwell       no          please contact CFEL DESY admins                               
     ps@maxwell       no          please contact                                  
    psx@maxwell       no          please contact                                  
   cssb@maxwell       no          please contact                             
 petra4@maxwell       no          please contact MPY admins                                     
   xfel@maxwell       no          please contact MPY admins                                     

Atlassian Tools        
     confluence      yes    
           jira       no            ask your group admins for jira access                         
          stash       no            ask your group admins for stash access                        
         bamboo       no           ask your group admins for bamboo access                       
Administrators: check

Home Directories

The home directory on Maxwell is /home/$USER. /home is hosted on a cluster file system (GPFS). More importantly:

/home has a single daily snapshot. It is not backed up and will not be archived. The snapshots are located in /home/.snapshot!

/home has a hard quota of 20GB!

Make sure to transfer important data to suitable resources (e.g. group specific storage).

Don't use it for any data crucial for your group! Once your account expires the data will be removed and will not be recoverable!
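Since /home is not backed up, the daily snapshot is the only way to recover an accidentally deleted file. A minimal sketch, assuming the usual layout of a snapshot directory (the structure below /home/.snapshot is an assumption - check with ls on a Maxwell node):

```shell
# Sketch: recover a file from the daily /home snapshot.
# The directory layout below /home/.snapshot is an assumption;
# inspect it with 'ls /home/.snapshot' on Maxwell first.
SNAPDIR=/home/.snapshot

if [ -d "$SNAPDIR" ]; then
    ls "$SNAPDIR"                 # list the available snapshot(s)
    # then copy the lost file back, e.g.:
    # cp "$SNAPDIR/<snapshot>/$USER/lost-file" "$HOME/lost-file"
else
    echo "no /home snapshots visible on this host"
fi
```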

Storage and Scratch

Everyone with access to the Maxwell cluster also has access to BeeGFS storage space. To create your BeeGFS directory under /beegfs/desy/user/$USER just invoke the command mk-beegfs on one of the maxwell nodes. For more information on BeeGFS and other storage elements available please have a look at Storage on Maxwell.
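The steps above can be sketched as follows; the mk-beegfs command and the /beegfs/desy/user layout come from this page, while the guard is only there so the snippet is safe to run on hosts without BeeGFS:

```shell
# Sketch: create and verify your personal BeeGFS directory
# (run on one of the Maxwell nodes).
BEEGFS_DIR="/beegfs/desy/user/$USER"

if command -v mk-beegfs >/dev/null 2>&1; then
    mk-beegfs                     # one-time creation of your directory
fi

if [ -d "$BEEGFS_DIR" ]; then
    echo "BeeGFS scratch available at $BEEGFS_DIR"
else
    echo "no BeeGFS directory yet - run mk-beegfs on a Maxwell node"
fi
```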

Kerberos & AFS 

Slurm jobs do NOT carry an AFS token or a Kerberos ticket!  This means jobs will not suffer from expiring tokens or tickets, but it also means that you cannot rely on their existence. If your jobs need access to AFS directories, it is advisable to set ACLs enabling token-free access - if possible.
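If token-free access is an option for your data, AFS ACLs can be set with the standard fs command. A hedged sketch - the path is a hypothetical placeholder, and the guard merely keeps the snippet harmless on hosts without an AFS client:

```shell
# Sketch: grant token-free read access to an AFS directory so that
# Slurm jobs can read it without holding a token.
# The path below is a placeholder - substitute your own AFS directory.
AFS_DIR=/afs/desy.de/group/mygroup/shared

if command -v fs >/dev/null 2>&1; then
    # 'system:anyuser rl' lets anyone - including tokenless jobs -
    # read (r) and list (l) the directory
    fs setacl "$AFS_DIR" system:anyuser rl
    fs listacl "$AFS_DIR"         # verify the resulting ACL
else
    echo "AFS 'fs' command not available on this host"
fi
```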

Stay informed

Announcements about updates, maintenance and so on are communicated via the mailing list. Maxwell users are automatically subscribed.

We strongly recommend self-subscribing to stay informed about changes and downtimes, even if you only use group-specific resources. Self-subscriptions are moderated and might take a moment.

Acknowledging the Maxwell Cluster

If the Maxwell cluster was an important asset in your work resulting in a publication, we would greatly appreciate an acknowledgment. There is currently no publication to refer to, so feel free to formulate the acknowledgment in your own terms. An example could look like this: This research was supported in part through the European XFEL and DESY funded Maxwell computational resources operated at Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany.

We would definitely like to hear about the publication! Please send references to publications to us (e.g. to Frank Schluenzen). Have a look at the list of contributed publications.

Next read: 

Maxwell for Everyone and Maxwell Storage
