After the testbeam, each run was assigned a flag ("OfflineQualityCheck") according to the following criteria.

The flags are:

  • good (= can be used for analysis)
  • check (= outlier; should be investigated further, might be useful for specific analysis)
  • bad (= should not be used for analysis)

A detailed explanation of the different flags for the different particle types is given below.


Pion Testbeam Data

To flag each run, we examined the distributions of the number of hits (nHits) and the energy sum (eSum) of each 'standard' run after the testbeam.

We were looking for outliers among the distributions of different runs at the same energy. If a distribution is clearly an outlier that cannot be explained by a significant change in detector position or in the beam line, the run is marked as 'check'.

More specifically, we look at the peak bin position (based on a binning of 100) and compare runs of the same energy. If a run's peak position differs by more than ±6% from the mean over those runs, it is marked as 'check'.
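The criterion above can be sketched as follows. This is not the actual ROOT macro from "calice_ROOTmacros"; it is a minimal Python illustration, assuming the peak bin positions have already been extracted from the 100-bin histograms. The run numbers and values are made up for the example.

```python
# Sketch of the outlier criterion: compare each run's peak bin position
# with the mean over all runs at the same beam energy and mark runs
# deviating by more than +-6% as 'check'. (Illustrative only; the real
# check is done with ROOT macros from the calice_ROOTmacros repository.)

def flag_runs(peak_positions, threshold=0.06):
    """peak_positions: dict mapping run number -> peak bin position
    (taken from a 100-bin nHits or eSum histogram).
    Returns a dict mapping run number -> 'good' or 'check'."""
    mean = sum(peak_positions.values()) / len(peak_positions)
    flags = {}
    for run, peak in peak_positions.items():
        deviation = abs(peak - mean) / mean  # relative deviation from the mean
        flags[run] = "check" if deviation > threshold else "good"
    return flags

# Hypothetical example: three consistent runs and one outlier
peaks = {101: 52, 102: 52, 103: 51, 104: 45}
print(flag_runs(peaks))  # run 104 deviates by 10% -> 'check'
```

Runs flagged this way are then inspected by eye, since a shifted peak can also come from a legitimate detector position or beam-line change.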

Exceptions (these runs were marked as 'good' even though their distributions look quite different than expected):

  • May: 10 GeV, because of large electron contamination
  • May: 15 GeV, because of low statistics and electron contamination (larger binning necessary for the histograms)
  • June: 10, 30 & 60 GeV: these were position scan runs - the center position was marked as 'good', but the other positions as 'good; SCAN', since the calibration constants are off for some detector parts at the moment (August 2018) - these runs need to be rechecked after a new iteration of the calibration constants

'Check' runs:

These have not been rechecked at the moment (August 2018) and need to be investigated further. It makes most sense to do so after a new iteration of the calibration constants.

ROOT macros:

The ROOT macros used to create the plots for this offline quality check can be found on the git / stash in the repository "calice_ROOTmacros".

Electron Testbeam Data
