DUST/Scratch Space
/nfs/dust/YOURVO/...
- fast central scratch space
- shared resource: disk space
- scratch means: No backup!
- shared between nodes
- mounted over the network
- shared resource: network
- no long-term storage
- expect clean-up requests for ancient files there
- your available quota is managed by your admin
AFS
/afs/desy.de/
- The "usual" DESY Home Directory
- AFS-Home is backed up!
- (shared) network file system
- shared resource: network
- long usage history
- not suited in any way for heavy or parallel reads or writes
- shared resource: performance
- do not use it for anything involving many files or very large files
How many files can you have per directory in AFS?
http://docs.openafs.org/Reference/8/dafileserver.html
You can have 64,000 files in an AFS directory if the filenames are all less than 16 characters long.
If the filenames are between 16 and 32 characters long, then this number decreases. There are 64,000 slots per directory.
Each file with a name of fewer than 16 characters takes 1 slot.
Each file with a name of 16 to 31 characters takes 2 slots, and so on.
In real-world use, the maximum number of objects in an AFS directory is usually between 16,000 and 25,000, depending on the average name length (a rough slot-count sketch follows at the end of this section).
- e.g., compiling on AFS might make you cry
- while other domains, e.g., /afs/cern.ch, are still available, support is fading away
- don't expect your favourite site to be around for long
- CERN is aiming to drop support soon
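As a rough illustration of the slot rule above (a sketch only, not an official tool; it assumes one slot per started block of 16 filename characters), you can estimate the slot usage of a directory on the shell:
afs_slots() {
    # one slot per started 16-character block of each entry name, out of 64,000 per directory
    ls -A "$1" | awk '{ s += int(length($0) / 16) + 1 } END { printf "%d of 64000 slots used\n", s }'
}
# example: afs_slots /afs/desy.de/...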
DUST and AFS quota check
- Use AMFORA to view quota and usage.
- Only works from within the DESY and NAF network; please use your usual credentials
- Select "my resources" on the left
- For DUST, you can check your quota from the interactive login servers with the command
my-dust-quota
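The AFS quota can also be checked on the shell with the standard OpenAFS client command fs listquota, e.g. on a WGS where your AFS-Home is mounted:
fs listquota ${HOME}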
dCache
/pnfs/desy.de/YOURVO/...
- long-term storage for results data
- large amounts of storage space ~PB
- not intended for volatile data
- mounted over the network
- read-only mounted: only reads, no writes are possible via the network path
- not suited for in-place writes, i.e., modifying an existing file, as it behaves more like an object store
- usage pattern depends on your experiment/VO policies
- ask your experiment admins how you are supposed to use it
- writes might be possible via your experiment's grid frameworks
- ask your experiment admins how to do so
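Purely as an illustration (the protocol, the dCache door and the file names below are placeholders, only the /pnfs prefix is taken from above), a write through a grid tool such as gfal-copy could look like:
gfal-copy file:///tmp/result.root davs://<your-dcache-door>/pnfs/desy.de/YOURVO/result.root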
CVMFS
/cvmfs/YOURVO.DOMAIN.FOO
- virtual file system for distributing static files in so-called "repositories", e.g., standard binaries
- e.g., experiment environments such as
/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh
(see the usage sketch at the end of this section)
- subdirectories and files of an available repository are loaded and cached automatically on first access
- e.g., if not yet present under /cvmfs, an existing repository such as
/cvmfs/singularity.opensciencegrid.org
will be mounted automatically on first access
- each repository is centrally managed by the corresponding experiment or VO
- read only
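As an example of typical usage (a sketch; the exact setup script depends on your experiment, here the ATLAS one quoted above), an environment is set up by sourcing the script from /cvmfs:
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
# the first access transparently mounts and caches the repository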
local home directory
${HOME}
- On the WGS, $HOME points to your AFS-Home
- On the batch WNs, $HOME is not set; your AFS-Home directory is, however, fully accessible
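A small sketch for job scripts that rely on ${HOME} (the AFS path is left as a placeholder; fill in your own AFS-Home):
if [ -z "${HOME:-}" ]; then
    export HOME=/afs/desy.de/...   # your AFS-Home path
fi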
temporary directory
${TMP}
- local temporary directory
- → fast I/O
- I/O is shared only with other users/jobs on the same node
- limited space
- no long-term storage: files live only for your job's lifetime
- access it via the environment variable $TMP
- might also be available under the environment variable $TMPDIR (check that the variable has a value before using it!)
- don't assume any particular absolute path, as you might upset your sysadmins
- good style
- create temporary directories or files under your user name
- on the shell, a temporary directory can be created with
mktemp -d -t ${USER}.XXXXXXXXXX
which is automatically placed in the ${TMPDIR} directory - similarly, a temporary file can be created with
mktemp -t ${USER}.XXXXXXXXXX
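Putting the pieces together, a minimal job-script sketch (file names and the DUST target are placeholders) that works in the node-local scratch space and cleans up after itself:
#!/bin/bash
set -eu
# create a personal working directory in the node-local temporary space
WORKDIR=$(mktemp -d -t ${USER}.XXXXXXXXXX)
# remove it when the job finishes, also on errors
trap 'rm -rf "${WORKDIR}"' EXIT
cd "${WORKDIR}"
# ... run the I/O-heavy part of the job here ...
# afterwards, copy the results to long-term storage, e.g. DUST:
# cp output.root /nfs/dust/YOURVO/...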