Relion is a cryo-EM software suite mainly targeted at single-particle analysis.

Tutorial: ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial.pdf

Loading the environment

Relion is installed using Environment Modules.

To use it at CSSB/DESY, run:

module load relion

For advanced users, the following versions are installed (a loading example follows the list):

  • relion/3.0: a generic version for GPU and CPU analysis (default).
  • relion/3.0-intel-cpuonly: a highly optimized CPU version for Intel processors which works best with a mix of MPI ranks and threads.
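For example, to list the installed versions and explicitly load the CPU-optimized build:

$ module avail relion
$ module load relion/3.0-intel-cpuonly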

Note:

  • To avoid interference between environment settings, it is best to start the Relion user interface with the same module that you intend to use for running the analysis on the cluster.
  • For performance analysis and optimal settings on the CSSB nodes, have a look at the Relion benchmark section below.

CSSB-specific Notes

Each computing cluster can be configured in various ways. Since Relion offers various run-time options (e.g. GPU, CPU, MPI, threading), we have extended the Relion user interface with predefined submission templates to fit the DESY Maxwell cluster.

The following entries are added to the Relion Running tab for DESY/CSSB Maxwell cluster support (a sketch of the corresponding Slurm options follows the list):

  • CPUs per task: assign more than one core per task to leave cores unused, so that each remaining task gets more main memory on its node.
  • Constraints: specify which type of CPU or GPU you want. The run performs best if all nodes used have the same hardware configuration (CPU/GPU type, number of cores, main memory).
  • Nodes: used mainly for GPU runs or CPU runs with hybrid MPI/threading.
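These fields are translated into Slurm options in the generated submission script. As an illustrative sketch (the actual templates shipped under /beegfs/cssb/software/etc/em/relion/3.0/ are authoritative), the header of a generated batch job could look like this:

#!/bin/bash
#SBATCH --partition=cssb,all            # Queue name
#SBATCH --nodes=5                       # Nodes
#SBATCH --cpus-per-task=1               # CPUs per task
#SBATCH --constraint="Gold-6126&1536G"  # Constraints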

Relion3 Wizard Howto

When the CSSB nodes are all occupied, you can use the DESY all partition (queue) for computing. This gives you free nodes until their owners need the resources again. Since the nodes in the all partition have different hardware specifications, it can get quite tricky to work out good performance settings. Therefore we provide a simple shell script that calculates the entries for the Relion GUI Running tab.

First look for free resources with the sinfo_detailed alias:

$ sinfo_detailed
NODELIST           PARTITION CPUS NODES NODES(A/I) AVAIL_FEATURES
max-cssb[002-010]  cssb      48   9     3/5        INTEL,Gold-6126,1536G
max-cssbg[001-003] cssbgpu   40   3     3/0        INTEL,V4,E5-2640,GPU,P100,GPUx2,512G
:
max-xxxx[101-188]  all       72   88    12/70      INTEL,Gold-6140,768G
max-xxxx[020-099]  all       80   80    28/52      INTEL,V4,E5-2698,512G
:

The output should be interpreted as follows:

  • NODELIST: name of the computing node
  • PARTITION: name of the partition (queue)
  • CPUS: number of cores per node
  • NODES: number of nodes with this configuration
  • NODES(A/I): number of nodes which are allocated (A) or idle (I)
  • AVAIL_FEATURES: name tags of the constraints used to request dedicated node features
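sinfo_detailed is a site-defined alias. If it is not available in your shell, something along these lines reproduces the columns above (a sketch only; the exact CSSB definition may differ):

alias sinfo_detailed='sinfo -o "%18N %10P %4c %5D %10A %f"'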

Now you can see that 5 of the CSSB nodes are idle and available. Copy & paste the AVAIL_FEATURES information into the relion3_wizard.sh call and play with the options to experiment with settings. Note that the wizard only prints information to the screen; you have to enter it in the Relion GUI yourself!

Try:

relion3_wizard.sh -h


Example 1: 5 CSSB nodes with the default relion/3.0 module

$ relion3_wizard.sh -n 5 INTEL,Gold-6126,1536G

Relion 3.x Compute tab settings
======================================================================
Use GPU acceleration?      : No
Which GPUs to use          :

Relion 3.x Running tab settings
======================================================================
Number of MPI procs        : 240
Number of threads          : 1
Submit to queue?           : Yes
Queue name                 : cssb,all
Queue submit command       : sbatch
CPUs per task              : 1
Constraints                : Gold-6126&1536G
Nodes                      : 5
Standard submission script : /beegfs/cssb/software/etc/em/relion/3.0/slurm_sbatch.sh
Additional arguments       :

Note
======================================================================
Usable memory per MPI proc or thread will be about: 32G
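These numbers follow directly from the node specs in the AVAIL_FEATURES tag: 5 nodes × 48 cores = 240 MPI procs with 1 thread each, and 1536G of main memory shared by 48 procs per node gives 1536G / 48 = 32G per proc.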


Example 2: 5 CSSB nodes with advanced socket bindings and the relion/3.0-intel-cpuonly module

$ relion3_wizard.sh -s -n 5 INTEL,Gold-6126,1536G

Relion 3.x Compute tab settings
======================================================================
Use GPU acceleration?      : No
Which GPUs to use          :

Relion 3.x Running tab settings
======================================================================
Number of MPI procs        : 11
Number of threads          : 23
Submit to queue?           : Yes
Queue name                 : cssb,all
Queue submit command       : sbatch
CPUs per task              : 1
Constraints                : Gold-6126&1536G
Nodes                      : 5
Standard submission script : /beegfs/cssb/software/etc/em/relion/3.0/slurm_sbatch_socket.sh
Additional arguments       :

Note
======================================================================
Usable memory per MPI proc or thread will be about: 32G
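With -s the wizard binds one MPI worker per CPU socket. Assuming dual-socket Gold-6126 nodes (2 × 12 cores with hyperthreading = 48 logical CPUs), 5 nodes give 10 workers plus 1 master = 11 MPI procs, and 23 threads per worker keep one of the 24 logical CPUs per socket free. Memory per thread is unchanged: 1536G / 48 = 32G.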


Example 3: 2 CSSB GPU nodes

$ relion3_wizard.sh -g -n 2 INTEL,V4,E5-2640,GPU,P100,GPUx2,512G

Relion 3.x Compute tab settings
======================================================================
Use GPU acceleration?      : Yes
Which GPUs to use          : 0:1


Relion 3.x Running tab settings
======================================================================
Number of MPI procs        : 5
Number of threads          : 1
Submit to queue?           : Yes
Queue name                 : cssbgpu,allgpu
Queue submit command       : sbatch
CPUs per task              : 1
Constraints                : P100&GPUx2
Nodes                      : 2
Standard submission script : /beegfs/cssb/software/etc/em/relion/3.0/slurm_sbatch_gpu.sh
Additional arguments       : 
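Here 5 MPI procs = 1 master plus 4 working slaves, one per GPU on the 2 GPUx2 nodes. The "Which GPUs to use" value 0:1 is Relion's colon-separated rank-to-GPU mapping, so the two slaves on each node use GPU 0 and GPU 1 respectively.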


Note: for MotionCor2 or Gctf jobs, replace the following settings:
======================================================================
Number of MPI procs        : 1
Constraints                : K40X|P100|V100
Nodes                      : 1
Standard submission script : /beegfs/cssb/software/etc/em/relion/3.0/slurm_sbatch_nompi.sh

Calling the external programs MotionCor2 or Gctf works differently from the other Relion MPI implementations (here the MPI master also acts as an MPI slave). Since both programs run very fast and do not need many GPU resources, you can submit without MPI and to the old K40X GPUs, which are most likely free all the time.
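A note on the constraint syntax: Slurm combines feature tags with & (AND) and | (OR), so Gold-6126&1536G requests nodes that have both features, while K40X|P100|V100 accepts any node with one of the three GPU types.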

Tips & Tricks

Q: What do the Relion STAR file metadata labels mean (e.g. rlnAngleRot)?

You can get a list of all metadata labels with a short explanation by calling:

relion_refine --print_metadata_labels

Or for a subset:

relion_refine --print_metadata_labels | grep rlnAngle
relion_refine --print_metadata_labels | grep Ctf