
Our HPC cluster uses SLURM as its resource manager, and on our old cluster we used the following job script:

#!/usr/bin/bash
#SBATCH --ntasks=8
#SBATCH --account=[…]
#SBATCH --partition=[…]
#SBATCH --cpus-per-task 1
#SBATCH --time=30:00

export OMP_NUM_THREADS=1

module load OpenMPI

mpirun /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP +F MurineMouseCalibratedCV_BlockModel_v17.par

scontrol show job $SLURM_JOB_ID
scontrol write batch_script $SLURM_JOB_ID -

On our new HPC cluster, the output suggested a problem with OpenMPI: instead of one parallel run across 8 tasks, the model appeared to run 8 times independently, and the computation was very slow.
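A quick way to check whether the mpirun launcher matches the MPI libraries the binary was linked against (a diagnostic sketch, reusing the openCARP path from the script above):

# Show which launcher is on the PATH and which MPI the binary links to:
which mpirun
mpirun --version
ldd /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP | grep -i mpi

If mpirun resolves to a different OpenMPI installation than the libmpi shown by ldd, each task will typically start as an independent serial process, which matches the behaviour described above.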

1 Answer


We were able to solve the problem by modifying the script as follows:

Instead of

module load OpenMPI
mpirun /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP +F MurineMouseCalibratedCV_BlockModel_v17.par

use

module use /home/user/.local/easybuild/modules/all
module load openCARP/17.0-foss-2023b
mpirun /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP +F MurineMouseCalibratedCV_BlockModel_v17.par
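The likely cause: openCARP 17.0 was built with the foss-2023b toolchain, and the generic "module load OpenMPI" on the new cluster apparently provided a different OpenMPI, so each of the 8 tasks ran the model as an independent serial process. Loading the openCARP module from the user's EasyBuild tree pulls in the matching OpenMPI dependency instead. A quick sanity check after loading the module (a sketch; exact module names may vary by site):

module list    # should include openCARP/17.0-foss-2023b and an OpenMPI from the same toolchain
which mpirun   # should now resolve inside the matching EasyBuild installation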

This results in the following final script:

#!/usr/bin/bash
#SBATCH --ntasks=8
#SBATCH --account=[…]
#SBATCH --partition=[…]
#SBATCH --cpus-per-task 1
#SBATCH --time=0:30:00
export OMP_NUM_THREADS=1

module use /home/user/.local/easybuild/modules/all
module load openCARP/17.0-foss-2023b

mpirun /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP +F MurineMouseCalibratedCV_BlockModel_v17.par

scontrol show job $SLURM_JOB_ID
scontrol write batch_script $SLURM_JOB_ID -
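Depending on the cluster's SLURM configuration, the job could alternatively be launched with srun instead of mpirun, letting SLURM place the 8 tasks itself (a hedged alternative, not from the original script; whether --mpi=pmix is required depends on how SLURM was built on your site):

# Alternative launcher, assuming SLURM was built with PMIx support:
srun --mpi=pmix /home/user/.local/easybuild/software/openCARP/17.0-foss-2023b/bin/openCARP +F MurineMouseCalibratedCV_BlockModel_v17.par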
