Relion is a cryo-EM software suite mainly targeted at single-particle analysis.
Tutorial Relion 3.0: ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial.pdf
Tutorial Relion 3.1: ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion31_tutorial.pdf
Loading the environment
Relion is installed using Environment Modules.
To use it at CSSB/DESY call:
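For example (a sketch using one of the module names from the versions listed below; pick the module that matches your job type):

```shell
# Load the Relion environment module and start the graphical user interface
module load relion/3.1-gcc8-gpu
relion &
```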
For advanced users we have the following different versions installed:
- relion/3.1-gcc8-altcpu: a highly optimized CPU version, which works best with a mix of MPI processes and threads
- relion/3.1-gcc8-gpu: a generic optimized version for GPU and CPU analysis; it should, however, be used for GPU jobs only.
- To avoid interference between environment settings, it is best to start the Relion user interface with the same module that you intend to use for running the analysis on the cluster.
- For performance analysis and optimal settings at the CSSB nodes have a look at the Relion 3.x Benchmark Results section.
Each computing cluster can be configured in various ways. Since Relion offers various run-time options (e.g. GPU, CPU, MPI, threading), we have extended the Relion user interface, in combination with predefined submission templates, to fit the DESY Maxwell cluster.
The following entries have been added to the Relion Running tab for DESY/CSSB Maxwell cluster support:
- CPUs per task: leave some cores unused so that each remaining task gets more main memory on each node (only useful when running many MPI processes with threads = 1).
- Constraints: specify which type of CPU or GPU you want to have. The run performs best if all used nodes have the same hardware configuration (CPU/GPU-type, amount of cores, main memory).
- Nodes: mainly used for GPU jobs or for CPU jobs with hybrid MPI/threading.
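As a rough sketch, these fields correspond to Slurm directives like the following in the generated submission script (the values, including the constraint name, are hypothetical examples; the actual Maxwell templates may differ):

```shell
#SBATCH --nodes=4             # "Nodes" field
#SBATCH --cpus-per-task=2     # "CPUs per task" field
#SBATCH --constraint=V100     # "Constraints" field (hypothetical GPU type)
```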
For Relion 3.1 we provide a crYOLO integration wrapper for automated particle picking using the latest crYOLO generic models.
Maxwell cluster submission helper
The DESY Maxwell cluster has many nodes in different configurations. We therefore provide a wizard that enters the optimal settings into the Relion Running tab.
Tips & Tricks
Q: How do I enable more memory per MPI process/thread on a node?
This depends on which submission template you are using. In general, use fewer MPI processes or threads per node.
- For socket submissions, reduce the number of threads: e.g. with MPI procs = 17 and threads = 9, try threads = 5 and leave the rest of the settings as they are.
- For MPI-processes-only submissions, use the CPUs per task field: e.g. with MPI procs = 384, threads = 1 and CPUs per task = 1, try CPUs per task = 2 and leave the rest of the settings as they are.
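The arithmetic behind this can be illustrated with hypothetical node figures (a 40-core node with 512 GB RAM is assumed for illustration only, not actual Maxwell specs): doubling CPUs per task halves the number of tasks per node and roughly doubles the memory available to each task.

```shell
# Hypothetical node: 40 cores, 512 GB RAM, memory shared evenly per task
cores=40
ram_gb=512
for cpt in 1 2; do
  tasks=$((cores / cpt))
  echo "CPUs per task=$cpt -> $tasks tasks/node, ~$((ram_gb / tasks)) GB per task"
done
```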
Q: How do I enable emails on cluster job statuses?
Slurm has specific parameters for this.
Copy the Relion template to your local folder, add the following lines with your email address set, and use it as the submission template:
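The standard Slurm directives for email notification look like this (the address is a placeholder; the exact mail-type values you want may differ):

```shell
#SBATCH --mail-type=END,FAIL            # notify when the job finishes or fails
#SBATCH --mail-user=user@example.org    # placeholder: set your own address
```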
Q: What do the Relion STAR file metadata labels mean (e.g. rlnAngleRot)?
You can get a list of all metadata labels with a short explanation by calling:
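One way to do this is via the flag below (assumed to be available in the Relion programs of the loaded module):

```shell
# Print all known STAR file metadata labels with a short description
relion_refine --print_metadata_labels
```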
Or for a subset:
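For example, to list only the angle-related labels, filter the output (a simple grep sketch):

```shell
# Show only labels containing "Angle", e.g. rlnAngleRot, rlnAngleTilt
relion_refine --print_metadata_labels | grep Angle
```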