Summary

Source: https://gitlab.com/Molcas/OpenMolcas

License:  LGPL v2.1

Path:  /software/openmolcas/[version]/[variant]

Documentation: https://molcas.gitlab.io/OpenMolcas/sphinx/

Description: OpenMolcas is a quantum chemistry software package developed by scientists and intended to be used by scientists. It includes programs to apply many different electronic structure methods to chemical systems, but its key feature is the multiconfigurational approach, with methods like CASSCF and CASPT2.

Citation:

Using openmolcas

openmolcas is available in two variants: a serial (OpenMP-threaded) build and an Intel-compiled build with OpenMPI support. The environment is best initialized using the module command:

[max]% module load maxwell openmolcas                # for the OpenMP version
[max]% module load maxwell openmolcas/22.10/omp      # to be explicit
[max]% module load maxwell openmolcas/22.10/mpi      # for the openmpi, intel compiled version 
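
The list of installed versions and variants may change over time; the standard module commands can be used to check what is currently available and what a given module sets (the output depends on the installation and is not shown here):

[max]% module avail openmolcas             # list all installed openmolcas modules
[max]% module show openmolcas/22.10/mpi    # inspect the environment a module sets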

Sample batch script for the OpenMP version (single node)

#!/bin/bash
#SBATCH --partition=maxcpu,allcpu
#SBATCH --constraint='75F3'    # restrict the job to one node type, for example
#SBATCH --nodes=1              # the OpenMP version runs on a single node
unset LD_PRELOAD

source /etc/profile.d/modules.sh
module purge
module load maxwell openmolcas/22.10/omp

export MOLCAS_MEM=10000        # per-process memory in MB: 10000 MB = 10 GB per process!
export Project=openmolcas-test
export WorkDir=$PWD/tmp/work   # scratch directory, for example
mkdir -p $WorkDir              # make sure the scratch directory exists

export OMP_NUM_THREADS=16      # threads per process, for example
export MOLCAS_NPROCS=16        # not needed for the OpenMP build; it should have no impact here

pymolcas test.inp > openmolcas-omp.log

# you can verify the memory and thread settings in the logfile, which should contain a line like
#   available to each process: 10 GB of memory, 16 threads
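
The script above runs an input file called test.inp, which is not shown (the MPI script below uses the same file). As a minimal sketch of what such a file could look like, the following input computes integrals and runs a Hartree-Fock calculation; the geometry file water.xyz and the ANO-S-MB basis are illustrative placeholders, not part of the Maxwell setup:

* minimal OpenMolcas input: GATEWAY/SEWARD set up the integrals, SCF runs Hartree-Fock
* water.xyz is a hypothetical XYZ geometry file in the submit directory
&GATEWAY
  Coord = water.xyz
  Basis = ANO-S-MB
&SEWARD
&SCF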

Sample batch script for the MPI version (multiple nodes)

#!/bin/bash
#SBATCH --partition=maxcpu
#SBATCH --constraint='[75F3]'   # with multiple nodes, set constraints so all nodes are of the same type
#SBATCH --nodes=2
unset LD_PRELOAD
source /etc/profile.d/modules.sh
module purge
module load maxwell openmolcas/22.10/mpi

export MOLCAS_MEM=10000        # per-process memory in MB: 10000 MB = 10 GB per process!
export Project=openmolcas-test
export WorkDir=$PWD/tmp/work   # scratch directory, for example
mkdir -p $WorkDir              # make sure the scratch directory exists

export OMPI_MCA_pml=ucx        # use the UCX transport layer; on older hardware you might need OMPI_MCA_pml=ob1 instead

export MOLCAS_NPROCS=$SLURM_NNODES # total number of MPI processes; one per node, for example
export OMP_NUM_THREADS=16          # threads per MPI process, for example

pymolcas -np $MOLCAS_NPROCS test.inp > openmolcas-mpi.log

# you can verify the number of processes and threads in the logfile, which should contain lines like
#           launched 2 MPI processes, running in PARALLEL mode (work-sharing enabled)
#                     available to each process: 10 GB of memory, 16 threads
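
Assuming the scripts are saved as openmolcas-omp.sh and openmolcas-mpi.sh (the file names are arbitrary), they are submitted with sbatch like any other batch job, and the settings can be checked once the job has run:

[max]% sbatch openmolcas-mpi.sh                             # submit the job
[max]% squeue -u $USER                                      # check its status
[max]% grep 'available to each process' openmolcas-mpi.log  # verify memory and thread settings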