
This page is for members of the FS groups, in particular for all members of the netgroup hasy-users. External Photon Science Users (Petra3, FLASH) please read the page for external Photon Science Users.

The Interactive Photon Science resource in Maxwell

The Maxwell cluster contains two distinct Photon Science areas: an entirely interactive part and a conventional batch part (see below). To use the interactive part, connect as follows:


  • For login to the CPU workgroup servers, connect to . Don't use individual node names.
    • Be aware that you're not alone on the systems.
  • For login to the GPU workgroup servers, connect to . Don't use individual node names.
    • Be aware that you're not alone on the systems.
  • If you cannot access the system (not a member of netgroup @hasy-users), please contact
  • If you're missing software or need support, contact (and in cc:
  • dCache instances petra3 and flash1 are mounted on max-fsc/g (/pnfs/ ; no Infiniband)
  • A GPFS scratch folder is available at /gpfs/petra3/scratch/ (approx. 30 TB, no snapshots, no backup, automatic clean-up [will come]).
    It's scratch space in the classical sense, i.e. for temporary data. This space is used by default as the temporary folder for Gaussian, and you can create a folder for yourself (e.g. named after your account) for other purposes. It's shared among all users, so you're asked to check your usage (in particular in view of Gaussian) and to clean up / free scratch disk space when it's no longer needed.
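A minimal sketch of claiming a personal scratch folder, assuming the mount point above (SCRATCH_BASE can be overridden for testing elsewhere):

```shell
# Sketch: create a personal subfolder on the shared scratch space and
# check how much space you currently occupy there.
SCRATCH_BASE="${SCRATCH_BASE:-/gpfs/petra3/scratch}"
mkdir -p "$SCRATCH_BASE/$USER"   # idempotent; name the folder after your account
du -sh "$SCRATCH_BASE/$USER"     # check your usage and clean up when done
```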

Currently you can use the resource without scheduling (exclusive usage would limit the resource to 11 users in parallel). We will monitor how it goes without such a scheduling measure. In case of 'crossfire', i.e. users disturbing each other, we have the option to put a scheduler similar to HPC or Maxwell core (i.e. SLURM) in front of the resource; how and in which way would then need to be discussed within photon science (e.g. a scheduler only for max-fsg, or for max-fsg and 50% of max-fsc, or ...).

As mentioned, max-fsc and max-fsg are a shared resource. You can only use a few cores and limited memory; if you use more than appropriate you will affect lots of colleagues. Some applications are very effective at crashing a node, so it might well happen that a machine goes down, terminating the processes of all users. This happens much more frequently on the interactive nodes than on batch nodes, and if it happens on a batch node only a single user is affected and no one else. Batch nodes are hence much more efficient to use than interactive nodes: you get a full machine (or multiple machines) exclusively for your job and can use all cores and memory available.

Hardware environment

The interactive Photon Science (FS) Maxwell resource currently consists of 11 nodes. The full list is available on the Maxwell Hardware page. max-fsc and max-fsg consist of the 11 nodes named max-p3a... and labelled as WGS.

The Photon Science Batch resource in Maxwell

As a first step, log in to max-fsc and check which Maxwell resources are available for your account using the my-partitions command:

[@max-fsc ~]$ my-partitions 

      Partition   Access   Allowed groups                                                                               
            all      yes   all                           <------- will be available if any of the resources below is "yes"
           cfel       no   cfel-wgs-users                
            cms       no   max-cms-uhh-users,max-cms-desy-users
       cms-desy       no   max-cms-desy-users            
        cms-uhh       no   max-cms-uhh-users             
           cssb       no   max-cssb-users                
      epyc-eval       no   all                           
          exfel       no   exfel-wgs-users               
      exfel-spb       no   exfel-theory-users,school-users
       exfel-th       no   exfel-theory-users            
   exfel-theory       no   exfel-theory-users            
     exfel-wp72       no   exfel-theory-users            
        fspetra       no   max-fspetra-users             
           grid       no   max-grid-users                
           jhub       no   all                           
        maxwell      yes   maxwell-users,school-users       <------- might be granted if you have suitable applications
            p06       no   max-p06-users                 
         petra4       no   p4_sim                        
             ps      yes   max-ps2-users                    <------- look for this one
            psx       no   max-psx2-users                
            uke       no   max-uke-users                 
           upex       no   upex-users,school-users       
     xfel-guest       no   max-xfel-guest-users,p4_sim   
        xfel-op       no   max-xfel-op-users             
       xfel-sim       no   max-xfel-sim-users            

* As member of netgroup hasy-users you can (also) use max-fsc and max-fsg for interactive logins
* As member of netgroup psx-users you can (also) use desy-ps-cpu and desy-ps-gpu for interactive logins

If it says "yes" for partition "ps" you are ready to go. If so you will also see  a "yes" at least for partition "all". If not: get in touch with FS-EC! Let's assume that you've got the ps-resource. The ps-resource also offers great additional opportunities for remote login. It entitles you to use the max-display nodes to connect to the maxwell cluster via your browser or the fastx-client. 

Apart from that: if you have an application that is started by a script called my-application and doesn't require a GUI, you can simply submit the script as a batch job:

[@max-fsc ~]$ sbatch --partition=ps --time=12:00:00 --nodes=1 my-application
Submitted batch job 1613895

# the job might already be running
[@max-fsc ~]$ squeue -u $USER
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
           1613895        ps   my-app     user  R       0:06      1 max-wn052
# Status of the job: R: running, PD: pending

This works for any application smart enough not to strictly require an X environment; MATLAB, COMSOL, Ansys, Mathematica, IDL and many others can be executed as batch jobs. To make it more convenient you can add the SLURM directives directly into the script:

sbatch script
[@max-fsc ~]$ cat my-application
#!/bin/bash
#SBATCH --partition=ps
#SBATCH --time=1-12:00:00      # request 1 day and 12 hours
#SBATCH --mail-type=END,FAIL   # send mail when the job has finished or failed
#SBATCH --nodes=1              # number of nodes
#SBATCH --output=%x-%N-%j.out  # by default slurm writes output to slurm-<jobid>.out; there are a number of options to customize this
[...] # the actual script

The email notification will be sent to <user-id>. That should always work, so you don't actually need to specify an email address. If you do, please make sure it's a valid address. For further examples and instructions please read Running Jobs on Maxwell.
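As an illustration of the --output pattern: %x is the job name, %N the node name and %j the job id, so the file name expands as in this sketch (all values below are hypothetical):

```shell
# Hypothetical job name, node and job id, just to show the expansion of
# the --output=%x-%N-%j.out pattern.
jobname=my-app; node=max-wn052; jobid=1613895
echo "${jobname}-${node}-${jobid}.out"   # -> my-app-max-wn052-1613895.out
```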

If you think that it's much too complicated to write job scripts, or if you can't afford to invest the time to look into it: we are happy to assist. Please drop a message to , and we'll try our best.

Running interactive batch jobs

If you absolutely need an interactive environment or X-windows features like a GUI, there are options to do that in the batch environment. For example:

# request one node for 8 hours:
[@max-fsc ~]$ salloc --nodes=1 --time=08:00:00 --partition=all
salloc: Pending job allocation 1618422
salloc: job 1618422 queued and waiting for resources
salloc: job 1618422 has been allocated resources
salloc: Granted job allocation 1618422
salloc: Waiting for resource configuration
salloc: Nodes max-p3ag022 are ready for job

# now you have a node allocated, so you can ssh into it
[@max-fsc ~]$ ssh max-p3ag022 
[@max-p3ag022 ~]$ # run your application!
[@max-p3ag022 ~]$ exit # this terminates the ssh session, it does NOT terminate the allocation
Connection to max-p3ag022 closed.
[@max-fsc ~]$ exit
salloc: Relinquishing job allocation 1618422
# now your allocation is finished. If in doubt use squeue -u $USER or sview to check for running sessions!

There are a few things to consider:

  • Interactive jobs with salloc easily get forgotten, leaving precious resources idle. We do accounting and monitoring! If your node utilization on the ps partition stays low over some time, you will start to get annoying emails, and if it stays like this your allocation might be removed. On the all partition we will let your job run nevertheless, but if there is an urgent need the job will be terminated by competing jobs. So, to be nice to your colleagues, please consider using the all partition for interactive jobs, and keep the time short: there is hardly a good reason to run an interactive job for longer than working hours. Use a batch job instead.
  • Terminate allocations as soon as the job is done!
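To turn such an interactive session into a batch job, a minimal script sketch could look like this (partition, time limit and job name are assumptions; the echo stands in for your actual application call):

```shell
#!/bin/bash
#SBATCH --partition=all        # assumed partition; adjust to what you may use
#SBATCH --time=04:00:00        # keep the requested time short
#SBATCH --nodes=1
#SBATCH --job-name=short-task  # hypothetical job name

# Placeholder payload: replace this line with your actual application.
echo "running on $(hostname)"
```

Submit it with sbatch as shown above; the node is released automatically when the script finishes, so nothing can stay forgotten.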

Hardware Environment

For an overview of available compute nodes have a look at:

Other Maxwell Resources

Being a member of FS and maybe having access to the ps partition doesn't need to be the end of the story. If you have parallelized applications suitable for the Maxwell cluster, you can apply for the Maxwell resource like everyone else on campus; please send a message to , briefly explaining your use case. Being also a user of the European XFEL, you might additionally have access to the upex partition. You can easily distribute your job over the partitions:

multiple partitions
[@max-fsc ~]$ cat my-application
#!/bin/bash
#SBATCH --partition=ps,maxwell,upex,all
#SBATCH --time=1-12:00:00      # request 1 day and 12 hours
#SBATCH --mail-type=END,FAIL   # send mail when the job has finished or failed
#SBATCH --nodes=1              # number of nodes
#SBATCH --output=%x-%N-%j.out  # by default slurm writes output to slurm-<jobid>.out; there are a number of options to customize this
[...] # the actual script

The partition will be selected from ps OR maxwell OR upex OR all, starting with the highest-priority partition: your job will run on the ps partition if nodes are available there, otherwise on the maxwell or upex partition, and finally on the all partition if none of the other specified partitions have free nodes. Keep in mind, however, that you should select the partitions according to the type of work you are doing. A job can never combine nodes from different partitions, so check the limits applying to each partition.

General Environment

Personal environment

The home directory is not the usual AFS home but resides on network storage (currently hosted on GPFS), located in /home/<user>:

[user@max-p3a001 ~]$ echo $HOME
/home/<user>

This home directory resides on GPFS, and thus you'll have the same "home" on all workgroup servers belonging to the new resource. Please note that it should NOT be considered a replacement for your AFS or Windows home directory. In particular, the home directory on max-fs is NOT backed up!

The usual AFS environment will not be available on max-fs! However, upon login you will obtain an AFS token and a Kerberos ticket:

[user@max-p3a001 ~]$ tokens
Tokens held by the Cache Manager:
User's (AFS ID 3904) tokens for [Expires Jun 20 09:40]
   --End of list--

[user@max-p3a001 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_9999_4GphrK1aA2
Default principal: user@DESY.DE
Valid starting       Expires              Service principal
06/19/2015 11:01:51  06/20/2015 09:40:08  krbtgt/DESY.DE@DESY.DE
	renew until 06/21/2015 09:40:08
06/19/2015 11:01:51  06/20/2015 09:40:08  afs/
	renew until 06/21/2015 09:40:08

Tokens and tickets expire after about 24h. In contrast to AFS homes, accessing the home directory does not require tokens or tickets, which means that long-running jobs will continue after token expiry; just ensure not to introduce dependencies on your AFS home. If needed, tokens and tickets can be renewed as usual, e.g. with:

[user@max-p3a001 ~]$ k5log -tmp

Your home-directory is subject to a non-extendable, hard quota of 20GB. To check the quota:

[user@max-p3a001 ~]$ mmlsquota max-home
                         Block Limits                                    |     File Limits
Filesystem type             KB      quota      limit   in_doubt    grace |    files   quota    limit in_doubt    grace  Remarks
max-home   USR           76576          0   10485760          0     none |      520       0        0        0     none
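If you prefer a single number, the relevant columns of the mmlsquota output can be post-processed with awk; a hedged sketch, here fed with the sample line from above instead of the live command:

```shell
# Compute home-quota usage in percent from a mmlsquota output line.
# On the cluster you would pipe the data line of `mmlsquota max-home` in;
# here we reuse the sample line shown above ($3 = KB used, $5 = KB limit).
sample="max-home   USR           76576          0   10485760          0     none |      520       0        0        0     none"
echo "$sample" | awk '{ printf "%.1f%% of %d GB used\n", 100*$3/$5, $5/1048576 }'
# -> 0.7% of 10 GB used
```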

Software environment

The systems are set up with CentOS 7.x (a RedHat derivative) and provide common software packages as well as some packages of particular interest for photon science (e.g. ccp4 [xds], conuss, fdmnes, phenix or xrt).

An overview of software available can be found here (no confluence account necessary for reading):

Some of the software can be executed directly; some packages are NOT available out of the box and have to be loaded via module before you can use them. Which package to load (if necessary), how to load it, and additional comments are given in Confluence as well (see for example the subsection ). In case software is missing please let us know (email to , and – until end of August – in cc: ).

Currently, for login to the machines the accounts need the resource (netgroup) "hasy-users", which most, but not all, FS staff accounts get by default. In case you cannot log into the resource please contact .

Storage environment

The GPFS core is mounted on the systems as:


Underneath you'll find data taken recently at the PETRA beamlines, sorted into subfolders by beamline with an additional substructure of year/type/tag. For example, data taken in 2015 at beamline p00 with beamtime AppID 12345678 would reside in /asap3/petra/gpfs/p00/2015/12345678/, followed by the directory tree as given/created during the beamtime. The folder logic is the same as during the beamtime, i.e. assuming you've been named a participant of a beamtime and are granted access to the data (controlled by ACLs [access control lists]), you can read data from the subfolder "raw" and store analysis results in "scratch_cc" (temporary/testing results) or "processed" (final results). For further details concerning the folders and their meaning please see the subsections in , topic ASAP3.
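The path for the example beamtime above can be assembled as in this sketch (beamline, year and AppID are the example values from the text, not a real beamtime):

```shell
# Sketch: assemble the data path for the example beamtime (p00, 2015,
# AppID 12345678); the tree below follows what was created during the beamtime.
beamline=p00; year=2015; appid=12345678
base="/asap3/petra/gpfs/$beamline/$year/$appid"
echo "$base/raw"          # measured data (read-only for participants)
echo "$base/processed"    # final analysis results
```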

The folder exported read-only to the beamline PCs, foreseen for documentation, macros etc., can be found in /asap3/petra/gpfs/common/<beamline>.

AFS is fully available on all max-nodes. 

If you need a large(r) amount of TEMPORARY space, please get in touch with . There is BeeGFS space available on the cluster which is well suited for temporary data.

