
Overview of storage systems

Administration and troubleshooting vary slightly across installations. The table below summarizes responsibilities; if in doubt, contact the responsible party listed there.

System | Responsible | Mount point | Remarks / Contact for
AFS | DESY IT | /afs/ | quota increases, general problems
dCache | DESY IT | /pnfs/ | arranging (missing) mounts, general problems
DESY BeeGFS | DESY IT | /beegfs/desy/ | creating a folder: execute mk-beegfs on Maxwell; all other issues
CSSB BeeGFS | CSSB / DESY IT | /beegfs/cssb/ | CSSB admins for access/usage rights, DESY IT for other issues
sync&share | DESY | - | all issues
CFEL GPFS | CFEL / DESY IT | see below | CFEL admins for access/usage issues, DESY IT for technical issues
EXFEL GPFS | EXFEL / DESY IT | see below | EXFEL admins for access/usage, DESY IT for technical issues
FS GPFS | ASAP3 team | see below | ASAP3 team for all issues
Netapp | DESY IT | /data/netapp | will be removed in the near future; all issues
Scratch / TMP | none | /scratch, /tmp | unmanaged temporary space
HOME | DESY IT | /home | all issues; quota will not be extended!
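Which of these mount points are actually present depends on the installation; a minimal sketch for checking from a login shell (the paths are those listed in the table, nothing else is assumed):

```shell
# Quick check: which of the storage systems from the table are mounted
# on the node you are currently logged in to.
for p in /afs /pnfs /beegfs/desy /beegfs/cssb /data/netapp /scratch /tmp; do
    if [ -d "$p" ]; then
        echo "available: $p"
    else
        echo "missing:   $p"
    fi
done
```

Contact the responsible party from the table to arrange any mounts reported missing.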

The characteristics of the storage systems differ substantially. The table below gives a rough overview; contact the responsible party for open questions.

Filesystem | Mount point | Quota | Size | Lifetime | Tokens | Backup | Snapshot | Network | Protocol | Throughput | Availability | Remote | Remarks
AFS | /afs/ | per volume | per volume | unlimited | yes | yes (1) | yes (2) | eth | afs | slow | yes | yes | Good for static data, documents and software; not good for multi-threaded applications.
BeeGFS | /beegfs/desy/ | no | 959TB | for temp data | ? | no | no | ? | ? | ? | ? | ? | Fast. Without backup or snapshots; deleted files are not recoverable. Not intended for archiving data, only for actively used data. It's scratch space, really!
CSSB BeeGFS | /beegfs/cssb/ | no | 3.2PB | unlimited | no | no | no | ib | beegfs | ? | CSSB only | no | Fast. Without backup or snapshots; deleted files are not recoverable. Lifetime of data is up to CSSB policy; contact the CSSB admins for details. Good for mass storage of scientific data, not suitable for volatile data.
sync&share | none | ? | 90TB? | unlimited | no | no | no | eth | https | slow | webdav-client | yes | Good for data sharing.
GPFS EXFL | /gpfs/exfel/d | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? | -
GPFS FS | ? | ? | ? | ? | ? | ? | ? | ib | gpfs, nfs | >10GB/s | FS only | ? | Fast. Good for scientific data from PETRA III and FLASH experiments and analysis.
GPFS FS scratch | /gpfs/petra3/scratch | no | 30TB | 3 months | no | no | no | ? | ? | ? | FS only | no | -
GPFS CFEL | ? | no | 501TB | unlimited | no | ? | ? | ib | gpfs, nfs | ? | CFEL only | no | Fast. Good for scientific data of CFEL.
scratch | /scratch | no | few GB | none | no | no | no | - | local | fast | yes | no | Scratch. Limited space and subject to erasure without prior notice.
HOME | /home on FS GPFS | 20GB hard | 30TB | account | no | no | yes (4) | ib | gpfs | >10GB/s | yes | no | -


Any files without an associated user (i.e. the account does not exist anymore) will be removed from NETAPP, BEEGFS, HOME and GPFS-FS-SCRATCH without prior notification!

Similar policies might apply to group storage; please verify with your admins if in doubt!

IMPORTANT: Snapshots

  1. For information about backup & recovery: check IT-Services
  2. AFS-snapshots are located in <afs-home>/.OldFiles
  3. GPFS Snapshots are located in /asap3/.snapshots/@<time-stamp>
  4. GPFS-Home snapshots are located in /home/.snapshots/@<time-stamp>/$USER
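As a sketch of how restoring a file from a GPFS home snapshot works: the timestamp, user name and file name below are made up, and the snapshot tree is simulated in a temporary directory so the commands are safe to try anywhere.

```shell
# Simulate the /home/.snapshots/@<time-stamp>/$USER layout in a temp dir.
root=$(mktemp -d)
mkdir -p "$root/.snapshots/@2024-05-01_0000/alice"
echo "thesis draft" > "$root/.snapshots/@2024-05-01_0000/alice/notes.txt"

# Pick the newest snapshot and copy the deleted file back.
# On Maxwell you would use /home/.snapshots/ and your own $USER instead.
snap=$(ls -d "$root"/.snapshots/@* | sort | tail -n 1)
cp "$snap/alice/notes.txt" "$root/notes.txt"
cat "$root/notes.txt"    # -> thesis draft
```

The same pattern applies to the AFS and /asap3/ snapshot locations listed above, with the respective paths substituted.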

Where to store Scientific data

AFS, BeeGFS, Netapp, Desycloud and dCache are available to everyone.

  • AFS is secure but suitable only for rather small data volumes.
  • Desycloud offers significantly more space, but uploads and downloads are very slow.
  • BeeGFS and GPFS-Scratch are fast, but exclusively for temporary data without any level of security, and subject to automatic removal of old data.
  • dCache is the only option for long-term storage of larger amounts of data. If your group doesn't have dCache storage space but would like to "buy in", get in touch with DESY IT.

In addition to generic storage resources, some group specific resources are available:

  • GPFS home directories: 20GB hard limit; don't store data in the home directory.
  • GPFS-FS resources: space dedicated to FS-experiments.
  • GPFS-CFEL resources: space dedicated to CFEL.
  • GPFS-EXFL resources: space dedicated to European XFEL.
  • If your group needs large amounts of fast, secure storage space, a group-owned GPFS appliance might be a solution. Get in touch with DESY IT if you need to know more.

Where to store Software

Most applications are small enough to be deployed in (almost) arbitrary locations. Suitable storage systems are:

  • AFS: globally accessible space. Good for software needed on Maxwell, desktop and BIRD alike. Keep in mind that restrictive ACLs will cause problems on Maxwell! AFS is not suitable for multi-host applications; it will have horrible side-effects on performance!
  • Netapp: will be retired. Don't start storing anything there.
  • GPFS-home: apart from the space limitations, well suited for software installations. Don't use it for group-shared installations!
  • For Applications shared within a group: use BeeGFS group directories, AFS or group specific resources where available.
  • Do NOT use dCache for software installations. Feel free to use BeeGFS, but be aware of the limitations.
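For group-shared installations, a common pattern is a versioned prefix inside the group directory, activated by prepending its bin directory to PATH. A minimal sketch: the group layout, tool name and version are made up, and everything runs in a temporary directory standing in for a real group directory (e.g. under /beegfs/desy/), so it is safe to try anywhere.

```shell
# Hypothetical layout: <group dir>/software/<tool>/<version>/bin
prefix=$(mktemp -d)/software/mytool/1.0   # stand-in for a group directory
mkdir -p "$prefix/bin"
printf '#!/bin/sh\necho "mytool 1.0"\n' > "$prefix/bin/mytool"
chmod +x "$prefix/bin/mytool"

# Users activate a specific version by prepending its bin directory:
PATH="$prefix/bin:$PATH"
mytool    # -> mytool 1.0
```

Keeping one directory per version makes it easy to roll a group back to a known-good installation without touching anything else.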

Where to store Documents

Assuming that documents are usually small:

  • AFS: secure with flexible ACLs. Globally accessible. Prime choice for documents.
  • Desycloud: well suited for sharing documents. Globally accessible. Not convenient to use in the HPC environment. No backups whatsoever, make sure to secure any valuable or sensitive data.
  • Netapp: well suited but not accessible outside the HPC site.
  • Don't store documents on BeeGFS, dCache or GPFS (except for documenting experiments).

Where to store Temporary data

  • BeeGFS: designed as scratch, it's perfect for large temporary data.
  • scratch: local space, but fairly limited volumes.
  • Don't store temporary data on dCache, AFS, netapp or GPFS.
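Since scratch data is subject to erasure anyway, it is good practice to tidy it yourself. A sketch using find, simulated in a temporary directory so it is safe to run; on Maxwell you would point it at your own directory under /scratch instead (note: `touch -d` requires GNU coreutils):

```shell
# Simulate a scratch directory with one fresh and one stale file.
scratch=$(mktemp -d)
touch "$scratch/fresh.dat"
touch -d '10 days ago' "$scratch/stale.dat"

# Delete everything untouched for more than 7 days.
find "$scratch" -type f -mtime +7 -print -delete
ls "$scratch"    # only fresh.dat remains
```

Double-check the path you pass to find before adding `-delete`; run with `-print` alone first.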

Best Place(s) for storing information - overview

                | AFS | BeeGFS | dCache | GPFS | Netapp | Desycloud | GPFS-Scratch | Local Scratch
scientific data |  -  |   -    |   +    |  +   |   -    |     -     |      -       |      -
temporary data  |  -  |   +    |   -    |  -   |   -    |     -     |      +       |      +