Structural reliability and building collapse in UQLab

Hi @voulpiotis,

Thanks for creating this discussion.
If I understand correctly, there are two goals:

  1. Creating surrogate models for a large number of outputs. In other words, your output (viewed as a vector \boldsymbol{y}) is high-dimensional: \boldsymbol{y} = (y_1, y_2, \ldots, y_M), where M is the number of outputs you consider.
  2. Using the output vector to determine the collapse scenario (i.e., post-processing \boldsymbol{y}), or possibly creating a surrogate that directly predicts the collapse scenario without going through the first step.

I assume the whole process is deterministic, meaning that a given set of input parameters uniquely determines the output, and the output in turn determines the corresponding collapse scenario.

For the first goal, you can create a PCE for each component of the output, if each component itself is of interest. To this end, you do not need a loop over each scalar y_i. Instead, you can organize your model responses as a matrix \boldsymbol{R} of size N \times M, where N denotes the number of model runs, so that R_{i,j} is the value of the j^{th} component of \boldsymbol{y} in the i^{th} run. Then, you only need the following command in UQLab:

MetaOptsMean.ExpDesign.Y = R;

UQLab will build a separate PCE for each component of \boldsymbol{y}. You can also have a look at the example script uq_Example_PCE_04_MultipleOutputs.m.
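For completeness, here is a minimal sketch of this workflow (the input marginals, the design X, the response matrix R, and the option names like MetaOpts are placeholders to adapt to your problem):

```matlab
uqlab

% Probabilistic input model (placeholder: 3 uniform inputs; use your own marginals)
for ii = 1:3
    IOpts.Marginals(ii).Type = 'Uniform';
    IOpts.Marginals(ii).Parameters = [0 1];
end
myInput = uq_createInput(IOpts);

% PCE on an existing experimental design: X is N x d, R is N x M
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCE';
MetaOpts.Method = 'LARS';          % sparse PCE via least-angle regression
MetaOpts.Degree = 2:5;             % degree-adaptive
MetaOpts.ExpDesign.X = X;
MetaOpts.ExpDesign.Y = R;          % one PCE per column of R
myPCE = uq_createModel(MetaOpts);

% Evaluating the metamodel returns all M components at once (Nval x M)
Yhat = uq_evalModel(myPCE, Xval);
```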

If some patterns in the output are helpful (or the dimensionality of the output vector is extremely high), you can post-process the output by projecting it onto a suitable basis, and then build a PCE for the projection coefficients. For example, in structural dynamics one can represent the structural displacements by the mode shapes, so the model response reduces to a few modal coefficients. If no physical insight is available, you can use dimension-reduction methods such as principal component analysis (PCA) to compress the output; see the sketch below.
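Here is a minimal PCA sketch along those lines (assuming the response matrix R of size N \times M from above; the 99% variance threshold is just an example):

```matlab
% Center the outputs and extract the principal directions via an SVD
Rmean = mean(R, 1);
Rc = R - Rmean;                         % implicit expansion (MATLAB R2016b+)
[~, S, V] = svd(Rc, 'econ');

% Keep the first k components that capture, e.g., 99% of the output variance
energy = cumsum(diag(S).^2) / sum(diag(S).^2);
k = find(energy >= 0.99, 1);
Z = Rc * V(:, 1:k);                     % N x k compressed outputs

% Build the PCEs on Z instead of R (k surrogates instead of M),
% reusing MetaOpts from the previous sketch
MetaOpts.ExpDesign.Y = Z;
myPCEZ = uq_createModel(MetaOpts);

% Map surrogate predictions back to the full output space
Zhat = uq_evalModel(myPCEZ, Xval);
Rhat = Zhat * V(:, 1:k)' + Rmean;
```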

For the second goal, you can directly use the output vectors represented by the series of PCEs to run Monte Carlo simulations. However, may I ask how you classify an output vector into a collapse scenario (the collapse area seems to be continuous rather than discrete)? If the scenario is determined by checking a few continuous quantities that are functions of the output vector, I would suggest building surrogates directly for these quantities instead of emulating the nodal displacements as an intermediate stage. Alternatively, you can use support vector machines or multinomial logistic regression to build a surrogate model directly for the classification. Support vector machines are available in UQLab for binary classification, so you can use a one-vs-all strategy; a sketch follows below.
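A minimal one-vs-all sketch with the UQLab SVC module could look as follows. Here `scenario` is a hypothetical N x 1 vector of integer class labels 1..K for the training runs; please check the SVC user manual for the exact outputs of uq_evalModel in your UQLab version:

```matlab
% One binary SVC per collapse scenario (one-vs-all)
K = max(scenario);
mySVC = cell(K, 1);
for k = 1:K
    SVCOpts = struct('Type', 'Metamodel', 'MetaType', 'SVC');
    SVCOpts.ExpDesign.X = X;
    SVCOpts.ExpDesign.Y = 2*(scenario == k) - 1;   % +1 for scenario k, -1 otherwise
    mySVC{k} = uq_createModel(SVCOpts);
end

% Classify new points: assign the scenario whose classifier predicts +1
% (ties should be resolved with the continuous decision value; see the
% SVC user manual for how to retrieve it)
Ypred = zeros(size(Xval, 1), K);
for k = 1:K
    Ypred(:, k) = uq_evalModel(mySVC{k}, Xval);
end
[~, predictedScenario] = max(Ypred, [], 2);
```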