Hello everyone,
I am trying to dispatch the “uq_Example_Dispatcher_01_BasicUsage.m” to our HPC cluster, using “profile_file_template_basic.m” as a template for my profile file. The problem I am facing is twofold:
- I want to pass the following information to ‘PrevCommands’, so that the commands appear as separate lines in my Slurm submission script (‘EnvSetup’ does not get written to the latter, I believe):
#SBATCH -p cclake
. /etc/profile.d/modules.sh
module purge
module load rhel8/default-icl
module load anaconda/3.2019-10
module load matlab/r2021b
source ~/.bashrc
Based on page 25 of the dispatcher user manual, ‘PrevCommands’ is meant to be a cell array, so I tried the following two options:
Option 1:
PrevCommands = reshape({'#SBATCH -p cclake', 'module load rhel7/default-ccl', 'module load anaconda/3.2019-10', 'module load matlab/R2021b', 'source ~/.bashrc'}, [5,1]);
Option 2:
PrevCommands = {'#SBATCH -p cclake', 'module load rhel7/default-ccl', 'module load anaconda/3.2019-10', 'module load matlab/R2021b', 'source ~/.bashrc'};
Both returned the following error:
Error using horzcat
Inconsistent concatenation dimensions because a 1-by-7 'char' array was converted to a 1-by-1 'cell' array. Consider creating arrays of the same type before concatenating.
Error in uq_Dispatcher_util_checkCommand (line 40)
cmdName = [envCommands 'command'];
Passing a single command works fine, though, but that obviously does not allow me to load MATLAB, Anaconda, etc.
My first question is therefore: how do I specify multiple ‘PrevCommands’, so that they all appear in my slurm submission script?
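For what it is worth, the only workaround I have come up with so far is an untested sketch that exploits the fact that a single entry is accepted: join all commands into one char with embedded newlines and hope the dispatcher writes it verbatim into the submission script (I have not verified that it does):

```matlab
% Untested workaround sketch: since a 1-by-1 cell with a single char works,
% join the commands into one multi-line char instead of separate cells.
cmds = {'#SBATCH -p cclake', ...
        'module load rhel8/default-icl', ...
        'module load anaconda/3.2019-10', ...
        'module load matlab/r2021b', ...
        'source ~/.bashrc'};
PrevCommands = {strjoin(cmds, newline)};  % 1-by-1 cell holding one multi-line char
```

I do not know whether the dispatcher preserves the newlines when it writes the script, so a confirmed way to pass multiple commands would still be much appreciated.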
- Our HPC cluster uses Intel MPI instead of Open MPI. As a result, the command ‘mpirun --report-pid mpirun.pid -np 1 ./mpifile.sh’ in ‘qfile.sh’ is not recognised (in the .stderr output file I get ‘unrecognized argument report-pid’). Now, if I run the dispatcher object from the remote host (the login node of the HPC cluster) by typing ‘mpirun -np 1 ./mpifile.sh’, I get the following error message:
Error in uq_remote_script (line 42)
matOutObj.Y = Y;
Error in run (line 91)
evalin('caller', strcat(script, ';'));
My second question is therefore: is it possible to execute ‘uq_remote_script.m’ directly on the login node (for debugging), and can I change the following line in ‘qfile.sh’ to be compatible with Intel MPI (for simulation)?
mpirun --report-pid mpirun.pid -np 1 ./mpifile.sh
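To make concrete what I mean, the replacement pattern I have in mind is to launch ‘mpirun’ in the background and record its PID by hand, since (as far as I can tell from the error message) Intel MPI's launcher has no ‘--report-pid’ flag. Below is an untested sketch of that pattern, with ‘sleep 1’ standing in for ‘mpirun -np 1 ./mpifile.sh’, which only exists on the cluster:

```shell
# Pattern I plan to use in qfile.sh ('sleep 1' is a stand-in for
# 'mpirun -np 1 ./mpifile.sh', which is only available on the cluster):
sleep 1 &                 # launch in the background
echo $! > mpirun.pid      # record the PID by hand (replaces --report-pid)
wait $!                   # wait for completion, keeping the exit status
echo "exit status: $?"
```

Whether the rest of the dispatcher machinery tolerates this change is exactly what I would like to find out.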
Best wishes and many thanks in advance,
Nils