Issue with HPC dispatcher module: module loading and MPI

Hello everyone,

I am trying to dispatch the “uq_Example_Dispatcher_01_BasicUsage.m” to our HPC cluster, using “profile_file_template_basic.m” as a template for my profile file. The problem I am facing is twofold:

  1. I want to pass the following commands via PrevCommands, so that they appear as separate lines in my slurm submission script (EnvSetup does not get written to the latter, I believe):
#SBATCH -p cclake
. /etc/profile.d/
module purge
module load rhel8/default-icl
module load anaconda/3.2019-10
module load matlab/r2021b
source ~/.bashrc

Based on page 25 of the dispatcher user manual, PrevCommands is meant to be a cell array, so I tried the following two options:

Option 1:

PrevCommands = reshape({'#SBATCH -p cclake', 'module load rhel7/default-ccl', 'module load anaconda/3.2019-10', 'module load matlab/R2021b', 'source ~/.bashrc'}, [5,1]);

Option 2:

PrevCommands = {'#SBATCH -p cclake', 'module load rhel7/default-ccl', 'module load anaconda/3.2019-10', 'module load matlab/R2021b', 'source ~/.bashrc'};

Both returned the following error:

Error using horzcat
Inconsistent concatenation dimensions because a 1-by-7 'char' array was converted to a 1-by-1 'cell' array. Consider creating arrays of the same type before concatenating.
Error in uq_Dispatcher_util_checkCommand (line 40)
        cmdName = [envCommands 'command'];

Passing a single command works fine, but that obviously doesn’t allow me to load MATLAB, Anaconda, etc.
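For what it’s worth, the error seems reproducible in isolation. The stack trace suggests the dispatcher appends a char suffix to the command list; the helper name comes from the trace, the rest below is my guess at a minimal reconstruction:

```matlab
% Hypothetical reconstruction of what triggers the error: horzcat of a
% column cell array with a char array. MATLAB converts 'command'
% (a 1-by-7 char) to a 1-by-1 cell, which cannot be horizontally
% concatenated with a 5-by-1 cell.
envCommands = reshape({'a','b','c','d','e'}, [5,1]);  % 5-by-1 cell
cmdName = [envCommands 'command'];  % -> Error using horzcat
```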

My first question is therefore: how do I specify multiple ‘PrevCommands’, so that they all appear in my slurm submission script?

  2. Our HPC cluster uses Intel MPI instead of OpenMPI. As a result, the command ‘mpirun --report-pid -np 1 ./’ in ‘’ is not recognised (in the .stderr output file I get ‘unrecognized argument report-pid’). Now, if I run the dispatcher object from the remote host (the login node of the HPC cluster) by typing mpirun -np 1 ./, I get the following error message:
Error in uq_remote_script (line 42)
matOutObj.Y = Y;

Error in run (line 91)
evalin('caller', strcat(script, ';'));

My second question is therefore: is it possible to execute ‘uq_remote_script.m’ on the login node (for debugging), and can I change the following line in ‘’ to be compatible with Intel MPI (for simulation)?

mpirun --report-pid -np 1 ./

Best wishes and many thanks in advance,

Hello again,

I found out that some of our older compute nodes use OpenMPI, so my second question has been resolved.

However, if anyone knows how to pass multiple commands to PrevCommands, so that they all appear line by line in my slurm submission script, please let me know, as this step is currently preventing me from using UQLab on the HPC.

Best wishes and many thanks for your time,

Dear @nmb29,

Can you please provide a self-contained, minimal reproducible example for this issue?

Best regards

Dear @styfen.schaer,

Thank you for your reply. Unfortunately, I am unable to upload files to UQWorld (‘Sorry, new users can not upload attachments.’), so I sent them to your ETH email address. If we manage to resolve the issue, I will summarise the fix here as reference for others.

Best wishes,

I don’t need your actual files. Just a script that is as small as possible, with some dummy data/functions, that reproduces the error. You can post your code here.

Dear Styfen,

Please create and submit a dispatcher file from the following code:


uqlab % initialize the UQLab session

ModelOpts.mString = 'X.*sin(X)';
ModelOpts.isVectorized = true;
myModel = uq_createModel(ModelOpts);

InputOpts.Marginals.Type = 'Uniform';
InputOpts.Marginals.Parameters = [0 15];
myInput = uq_createInput(InputOpts);

DispatcherOpts.Profile = 'myHPCProfile.m';
myDispatcher = uq_createDispatcher(DispatcherOpts);

X = uq_getSample(10);
Ydispatched = uq_evalModel(X,'HPC')

Also create an HPC profile file from the following code:

%% Authentication
Hostname = '';
Username = 'nmb48';
PrivateKey = '/home/nmb48/Documents/GitHub/PhD_Code/For_HPC/sensitivity/hpc/key_22042024';

%% Remote workspace
RemoteFolder = '/home/nmb48/rds/UQLab_workspace';

%% Remote computing environment
MATLABCommand = '/usr/local/Cluster-Apps/matlab/R2021b/bin/matlab';
%% UQLab
RemoteUQLabPath = '/home/nmb48/UQLab_Rel2.0.0/';

%% Remote environment
%EnvSetup = reshape({'. /etc/profile.d/', 'module load rhel7/default-ccl', 'module load openmpi/4.1.5/intel/b42idtrx'}, [3,1]); % @Styfen Schaer: these are the inputs I would like to pass to my slurm submission script, but I get an error
EnvSetup = {'. /etc/profile.d/'};{'module load rhel7/default-ccl'};{'module load openmpi/4.1.5/intel/b42idtrx'}; % @Styfen Schaer: this does not error, but only the first command gets through
%PrevCommands = reshape({'#SBATCH -p cclake', 'module load rhel7/default-ccl', 'module load anaconda/3.2019-10', 'module load matlab/R2021b', 'source ~/.bashrc'}, [5,1]); % @Styfen Schaer: these are the inputs I would like to pass to my slurm submission script, but I get an error
PrevCommands = {'#SBATCH -p cclake'};{'module load rhel7/default-ccl'};{'module load anaconda/3.2019-10'};{'module load matlab/R2021b'};{'source ~/.bashrc'}; % @Styfen Schaer: this does not error, but only the first command gets through

%% Job scheduler
Scheduler = 'slurm';

The slurm submission script I get from running the above code reads as follows:

#SBATCH --job-name=02May2024_at_15583588
#SBATCH --output=02May2024_at_15583588.stdout
#SBATCH --error=02May2024_at_15583588.stderr
#SBATCH --time=60
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`

#SBATCH -p cclake

cd /home/nmb48/rds/UQLab_workspace/02May2024_at_15583588
mkdir logs

touch .uq_job_started
mpirun --report-pid -np 1  ./

The .stderr output file reads as follows:

[mpiexec@cpu-p-65] match_arg (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:91): unrecognized argument report-pid
[mpiexec@cpu-p-65] HYD_arg_parse_array (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:128): argument matching returned error
[mpiexec@cpu-p-65] mpiexec_get_parameters (../../../../../src/pm/i_hydra/mpiexec/mpiexec_params.c:1313): error parsing input array
[mpiexec@cpu-p-65] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1738): error parsing parameters

I hope this helps.

Best wishes,

Dear @styfen.schaer,

Have you been able to reproduce my issue?

If you require any further information, please let me know.

Best wishes,

Hi @nmb29

Unfortunately, I haven’t had the time to set it up and run it yet, and at first glance I can’t see any error. I think you know this already and just tried different things, but EnvSetup = {'...'};{'...'};{'...'}; does not work because it only assigns the first cell to EnvSetup; the others are lost. However, I would have assumed that EnvSetup = {'...', '...', '...'}; works. Can you try putting everything in a single char array or string and separating the commands with a newline character? E.g.: EnvSetup = sprintf('do something\ndo some more work\ndo final work'); (note that in MATLAB, \n inside a plain single-quoted char array is literal, so sprintf or the newline function is needed).
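A minimal sketch of that suggestion, using the commands from the profile file above (whether the dispatcher accepts a newline-joined char array instead of a cell array is exactly the assumption being tested):

```matlab
% Join the individual commands with real newline characters.
% strjoin and newline are standard MATLAB (R2016b+); whether the
% dispatcher option accepts this single-char-array form is untested.
cmds = {'#SBATCH -p cclake', ...
        'module load rhel7/default-ccl', ...
        'module load anaconda/3.2019-10', ...
        'module load matlab/R2021b', ...
        'source ~/.bashrc'};
PrevCommands = strjoin(cmds, newline);  % one char array, one command
                                        % per line
% Equivalent with sprintf (note: this variant adds a trailing newline):
% PrevCommands = sprintf('%s\n', cmds{:});
```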