RBDO analysis: design variable range

:grin: :grin: :grin:
First, I really appreciate the team for this amazing software, UQLab. Thanks to @moustapha for the input on the RBDO module. I have a few questions I would like to ask:
1) Can I define margins (bounds) for the design variables? I constructed a meta-model to replace the time-consuming numerical simulations, but when I checked the experimental design I found that many values exceeded the marginal values. I tried to assign a range, but it didn’t work…
I want to set the range because the augmented space is determined from a 95% confidence interval, and its extent is larger than the range I defined (e.g. the augmented space computed by the UQLab RBDO code is [50, 100], whereas the actual range of the design variable is [60, 70]). A large augmented space introduces noise points and covers unnecessary regions, which reduces the accuracy of the meta-model and increases the number of points needed to construct it.
2) Three criteria are provided for meta-model construction to improve the accuracy, but when I use the built PCK meta-model to predict new points, the accuracy is not very good: the leave-one-out error is around 0.03. I would like to use the leave-one-out error as the PCK convergence criterion to assist the RBDO. Can that work?
Looking forward to your reply. Thank you in advance.

Cordially,
Tingting

Dear @ttz ,

Thank you for your message.

1- You can define the quantiles for the augmented space with the option RBDOpts.AugSpace.DesAlpha (please have a look at Table 13, p. 40 of the RBDO user manual for more details). The default value is 1%. If you want to reduce the size of the augmented space, you should provide a smaller value. I would recommend that you check the problem formulation, though: it seems odd to me that reasonable quantiles of the random variables associated with the design parameters cover three times the range of the design space. The uncertainty on a design variable is supposed to represent small variability around its nominal value.
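
For reference, here is a minimal sketch of where this option sits in an RBDO configuration. Only the `AugSpace.DesAlpha` field is taken from the answer above; the analysis type, the other fields, and the chosen value are assumptions and should be checked against the RBDO user manual.

```matlab
% Minimal sketch, not a complete RBDO setup: only AugSpace.DesAlpha is
% documented above; the remaining fields and values are assumptions.
RBDOpts.Type = 'RBDO';                % assumed analysis type identifier
RBDOpts.AugSpace.DesAlpha = 0.05;     % quantile level used to bound the augmented
                                      % space (default 1% according to the reply)

% ... cost function, limit states, design variables, etc. go here ...

myRBDO = uq_createAnalysis(RBDOpts);  % standard UQLab call to create the analysis
```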

2- In active learning, the accuracy of the surrogate is measured with respect to the limit-state surface. The idea is to keep the probability of misclassification small within each reliability analysis throughout the optimization process, so it is not a good idea to check the global accuracy of the surrogate model within an active learning approach. If you really want a small LOO error, then I would suggest not using active learning, but instead building the surrogate on a large experimental design and using it in a static scheme. If you want to reach a target LOO error, you may enrich the experimental design by batches of LHS samples, as sketched below.
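
Here is a minimal sketch of such a static scheme, assuming a UQLab session where `myInput` is an INPUT object describing the augmented space and `myTrueModel` is the expensive computational model. The target LOO error, the batch size, and the exact location of the LOO estimate (`myPCK.Error.LOO`) are assumptions to be checked against the PC-Kriging user manual.

```matlab
% Sketch of a static PCK surrogate built on an LHS experimental design,
% enriched by batches of LHS samples until a target LOO error is reached.
targetLOO = 1e-2;    % hypothetical target LOO error
Nbatch    = 50;      % hypothetical enrichment batch size

% Initial experimental design in the augmented space
X = uq_getSample(myInput, 200, 'LHS');
Y = uq_evalModel(myTrueModel, X);        % expensive model evaluations

MetaOpts.Type     = 'Metamodel';
MetaOpts.MetaType = 'PCK';

while true
    MetaOpts.ExpDesign.X = X;
    MetaOpts.ExpDesign.Y = Y;
    myPCK = uq_createModel(MetaOpts);

    if myPCK.Error.LOO <= targetLOO      % assumed field name for the LOO error
        break
    end

    % Enrich the experimental design by a batch of LHS samples
    Xnew = uq_getSample(myInput, Nbatch, 'LHS');
    X = [X; Xnew];
    Y = [Y; uq_evalModel(myTrueModel, Xnew)];
end
```

The resulting `myPCK` can then be used as a fixed (static) surrogate of the limit state in the subsequent RBDO, rather than being enriched adaptively during the optimization.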

I hope this helps.

Best,
Moustapha