This page is for external Photon Science users with a scientific account. Photon Science members please read the page Maxwell for PS.
The PSX resources in Maxwell serve the Photon Science users. They consist of the PSX partition, interactive compute nodes and the GPFS storage infrastructure.
For details about the setup of the partitions and compute nodes, and the limits that apply, please have a look at the Compute Infrastructure page.
Interactive Login Nodes
Preferably use max-display.desy.de or https://max-display.desy.de:3443/ to connect to the Maxwell cluster, and from there log in to the interactive compute nodes - or submit batch jobs.
Please have a look at Access and the Interactive Login. max-display is directly accessible from outside (no tunneling required).
Login nodes are always shared resources, sometimes used by a large number of concurrent users. Don't run compute- or memory-intensive jobs on the login nodes; use a batch job instead!
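A minimal sketch of the typical workflow; the account name "user" and the job script name are placeholders:

[you@laptop ~]$ ssh user@max-display.desy.de         # max-display is directly reachable from outside
[user@max-display ~]$ ssh desy-ps-cpu.desy.de        # hop to an interactive compute node ...
[user@max-display ~]$ sbatch my_job.sh               # ... or submit a batch job instead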
For other information have a look at Getting started.
Interactive Compute Nodes
Summary
- For login to the CPU workgroup server connect to desy-ps-cpu.desy.de. Don't use individual node names.
- Be aware that you’re not alone on the systems.
- max-fsc is not directly accessible from outside. Tunnel through bastion.desy.de, or use max-display.desy.de (see the SSH example after this list).
- For login to the GPU workgroup server connect to desy-ps-gpu.desy.de. Don't use individual node names.
- Be aware that you're not alone on the systems.
- max-fsg is not directly accessible from outside. Tunnel through bastion.desy.de, or use max-display.desy.de.
- If you cannot access the system (i.e. you are not a member of the netgroup @hasy-users), please contact fs-ec@desy.de or the beamline scientist connected to your experiment.
- If you're missing software or need support, contact maxwell.service@desy.de or fs-ec@desy.de, or talk to your beamline scientist for advice.
- dCache instances petra3 and flash1 are mounted on max-fsc/g (/pnfs/desy.de/ ; no Infiniband)
- A gpfs-scratch folder is available at /gpfs/petra3/scratch/ (no snapshot, no backup, automatic clean-up [will come]).
It's scratch space in the classical sense, i.e. for temporary data. It's shared among all users, so you're asked to check and clean up / free scratch disk space when it's no longer needed.
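As an illustration of the tunnel mentioned above, an OpenSSH configuration on your local machine could look like the sketch below; the host alias and the fully qualified name max-fsc.desy.de are assumptions, adjust them to your setup:

# ~/.ssh/config on your local machine
Host max-fsc
    HostName max-fsc.desy.de           # assumed fully qualified hostname
    User user                          # replace with your account name
    ProxyJump user@bastion.desy.de     # hop through the DESY bastion host

With this entry a simple "ssh max-fsc" opens the tunnelled connection; the same can be done on the command line with "ssh -J user@bastion.desy.de user@max-fsc.desy.de".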
desy-ps-cpu.desy.de is a very limited shared resource. You can only use a few cores and limited memory. If you use more than appropriate, you will affect lots of colleagues. Some applications are very effective at crashing a node, so it might well happen that a machine goes down, terminating the processes of all users. This happens much more frequently on the interactive nodes than on batch nodes, and if it happens on a batch node, only a single user is affected and no one else. Batch nodes are hence much more efficient to use than interactive nodes: you get a full machine (or multiple machines) exclusively for your job, and you can use all cores and memory available. For any serious data analysis or simulations the batch component of the Maxwell cluster is your best choice!
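If you prefer to keep working interactively, you can request a full node from the batch system instead of squeezing onto the shared workgroup servers. A sketch using standard Slurm commands; the partition name and time limit are examples, check which partitions are available to your account as described below:

[user@max-display ~]$ srun --partition=psx --nodes=1 --time=02:00:00 --pty bash -i

This gives you a shell on a batch node allocated exclusively for you; when you exit the shell, the allocation is released.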
The Photon Science Batch resource for external users
As a first step, log in to max-display.desy.de and check which Maxwell resources are available for your account using the my-partitions command. If it says "yes" for partition "psx" you are ready to go; in that case you will also see a "yes" at least for partition "all". If not: get in touch with FS-EC.
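A minimal batch script for the psx partition could look like the sketch below; the job name, time limit and the analysis command are placeholders:

#!/bin/bash
#SBATCH --partition=psx              # Photon Science partition for external users
#SBATCH --nodes=1                    # the node is allocated exclusively for the job
#SBATCH --time=04:00:00              # adjust to the expected runtime
#SBATCH --job-name=my-analysis       # placeholder job name
#SBATCH --output=my-analysis-%j.out  # %j is replaced by the job id

./run_my_analysis                    # placeholder for your actual analysis command

Submit it from max-display with "sbatch my-analysis.sh" and monitor it with "squeue -u $USER".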
Personal environment
The home directory is on network storage (currently hosted on GPFS) and located in /home/<user>:
[user@desy-ps ~]$ echo $HOME
/home/user
This home directory resides on GPFS and thus you'll have the same "home" on all nodes of the Maxwell cluster.
Your home-directory is subject to a non-extendable, hard quota of 30GB. To check the quota:
[user@max-p3a001 ~]$ mmlsquota max-home
                          Block Limits                           |      File Limits
Filesystem type       KB   quota      limit  in_doubt   grace |  files  quota  limit  in_doubt  grace  Remarks
max-home   USR     76576       0   10485760         0    none |    520      0      0         0   none  core.desy.de
Storage environment
The GPFS core is mounted on the system as
/asap3/petra/gpfs
Underneath you'll find data taken recently at the PETRA beamlines, sorted in subfolders by beamline with an additional substructure of year/type/tag. For example, data taken in 2015 at beamline p00 with beamtime AppID 12345678 would reside in /asap3/petra/gpfs/p00/2015/12345678/, followed by the directory tree as given/created during the beamtime. The folder logic is the same as during the beamtime, i.e. assuming you've been named participant for a beamtime and are granted access to the data (controlled by ACLs [access control lists]), you can read data from the subfolder "raw" and store analysis results in "scratch_cc" (temporary/testing results) or "processed" (final results). For further details concerning the folders and their meaning please see the subsections in confluence.desy.de, topic ASAP3.
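Using the example beamtime above (beamline p00, year 2015, AppID 12345678) purely as an illustration, access could look like this; getfacl only displays the ACLs so you can check whether your account is listed, and the result file name is a placeholder:

[user@max-display ~]$ ls /asap3/petra/gpfs/p00/2015/12345678/raw                    # read raw data as a named participant
[user@max-display ~]$ getfacl /asap3/petra/gpfs/p00/2015/12345678/raw               # inspect the access control list
[user@max-display ~]$ cp result.h5 /asap3/petra/gpfs/p00/2015/12345678/processed/   # store final analysis results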
The folder exported read-only to the beamline PCs, foreseen for documentation, macros etc., can be found at /asap3/petra/gpfs/common/<beamline>.
If you need a large(r) amount of TEMPORARY space, please get in touch with maxwell.service@desy.de. There is BeeGFS space available on the cluster which is well suited for temporary data.