ASAP3 : Directory Structure


as seen from a machine at the beamline

beamline file-system
/gpfs/
├── local
├── current
|   ├── raw
|   ├── processed
|   ├── shared
|   └── scratch_bl
└── commissioning
    ├── raw
    ├── processed
    ├── shared
    └── scratch_bl
 
/common/
└── <beamline>

/bl_documents
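For concreteness, the beamline-side paths above can be composed as follows; the beamline name 'p01' is a hypothetical example, not taken from this document:

```shell
# Beamline-side paths as seen from a machine at the beamline.
# "p01" is a hypothetical beamline name used only for illustration.
BEAMLINE=p01

LOCAL_DIR=/gpfs/local                 # 3 TiB buffer, never synced to the core FS
RAW_DIR=/gpfs/current/raw             # raw data of the running beamtime
SCRATCH_DIR=/gpfs/current/scratch_bl  # temporary data, never migrated
COMMON_DIR=/common/$BEAMLINE          # read-only area provided by beamline staff

echo "$RAW_DIR"
echo "$COMMON_DIR"
```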

as seen from maxwell

core file-system
/asap3/
└── <facility>
    └── gpfs
        ├── <beamline>
        |   ├── <year>
        |   |   ├── data
        |   |   |   └── <beamtime-ID>
        |   |   |       ├── raw
        |   |   |       ├── processed
        |   |   |       ├── shared
        |   |   |       └── scratch_cc
        |   |   └── commissioning
        |   |       └── <commissioning-tag>
        |   |           ├── raw
        |   |           ├── processed
        |   |           ├── shared
        |   |           └── scratch_cc
        └── common
            └── <beamline>
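As a sketch, a beamtime directory on Maxwell can be composed from the placeholders in the tree above; the facility, beamline, year and beamtime ID below are hypothetical examples:

```shell
# Compose the core file-system path of one beamtime as seen from Maxwell.
# All concrete values below are hypothetical examples.
FACILITY=petra3
BEAMLINE=p01
YEAR=2024
BEAMTIME_ID=11001234

BEAMTIME_DIR=/asap3/$FACILITY/gpfs/$BEAMLINE/$YEAR/data/$BEAMTIME_ID
echo "$BEAMTIME_DIR/raw"         # migrated and archived
echo "$BEAMTIME_DIR/scratch_cc"  # never archived
```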

Description

  • 'local': 
    • A 3 TiB local share of the beamline file-system that persists independently of beamtimes
    • It is managed by the beamlines themselves
    • There is no syncing across to the core file-system.
    • It serves as a local buffer.
    • Anybody on the beamline can read/write from/to it
    • Cannot be accessed from the core file-system
  • 'current':
    • Non-permanent share from the beamline file-system
    • It will appear when a beamtime is started and disappear when it is stopped
    • Meaning of the different directories:

      • raw: 
        • For raw data
        • will be 
          • migrated into the core file-system
          • written to tape
          • shown in the Gamma-portal
      • processed
        • For meaningful processed data
        • will be 
          • migrated into the core file-system
          • written to tape
          • shown in the Gamma-portal
      • shared
        • For user-specific macros, scripts, metadata, text files, ...
        • will be
          • migrated into the core file-system
          • written to tape
          • shown in the Gamma-portal
      • scratch
        • Meant for 
          • temporary data
          • data whose usefulness is not known in advance; such data can be written here and, if it turns out to be meaningful, copied into 'processed' afterwards
        • will not be migrated into the core file-system
  • 'commissioning'
    • Same directory structure as 'current' but for commissioning runs
    • Commissioning runs are limited to a 1 TiB hard quota
    • No ACL management or data export via Gamma Portal
    • No data archival
  • 'common':
    • Mounted read-only in the beamline space
    • It is world-readable, meaning that all beamlines can read it
    • Meant for documentation, macros, ... provided to the users by the beamline staff
    • If users need to edit something, such as a macro, they can copy it into 'shared'
    • Can be changed from the core side by the beamline staff
  • bl_documents:
    • Mounted read-write in the beamline space with a 1 TiB hard quota per beamline
    • Only the beamline-specific part is mounted
    • Stays there independently of the beamtimes
    • Two snapshots are taken daily; 28 snapshots (4 weeks) are kept
    • For scripts and documentation
    • Is not visible from Maxwell and will remain so
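The scratch workflow described for 'current' (write exploratory output first, promote only meaningful results to 'processed') can be sketched as follows. Temporary directories stand in for the real beamtime paths so the sketch is runnable outside a beamtime:

```shell
# Sketch of the scratch workflow: write exploratory data to scratch,
# then copy only the meaningful results into processed.
# mktemp stand-ins replace /gpfs/current/scratch_bl and
# /gpfs/current/processed so the example runs anywhere.
SCRATCH=$(mktemp -d)    # stands in for /gpfs/current/scratch_bl (never migrated)
PROCESSED=$(mktemp -d)  # stands in for /gpfs/current/processed (migrated + archived)

echo "exploratory result" > "$SCRATCH/run42.dat"
# After inspection the data turned out to be meaningful, so promote it:
cp "$SCRATCH/run42.dat" "$PROCESSED/"
ls "$PROCESSED"
```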


Description (core file-system)

  • Access only with a valid DESY or Science Account
  • '<facility>'
    • Determines the facility where the data was acquired
    • Supported facilities
      • petra3
      • flash
      • spec.instruments
      • fs-ds-agipd
      • fs-ds-percival
      • fs-flash-b
      • fs-flash-o
    • The remaining directory structure is identical between all facilities
    • The scratch_cc directory (scratch space for the computing centre) is never transferred to the archive; if a beamtime's data is taken offline (removed from GPFS), the scratch_cc folder is irretrievably lost.
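Which beamtime subdirectories survive archival follows directly from the rules above: raw, processed and shared go to tape, while scratch_cc does not. A small sketch:

```shell
# Summarize the archival rules for the beamtime subdirectories:
# raw, processed and shared are written to tape; scratch_cc never is.
ARCHIVED=""
for d in raw processed shared scratch_cc; do
  case $d in
    scratch_cc) echo "$d: not archived (lost once the beamtime is taken offline)";;
    *)          ARCHIVED="$ARCHIVED $d"; echo "$d: archived to tape";;
  esac
done
```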



Attachments:

Storage Architecture.graphml (application/octet-stream)
Storage Architecture.png (image/png)