Thank you for your kind reply.
I read the surrogate-assisted RBDO material you mentioned.
But what I meant was an approach that optimizes using only data, like data-driven PCE. Surrogate-assisted RBDO seems to assume that the limit-state function of the system is known in advance, rather than relying on data alone. In a complex system, however, the limit-state function may not be known.
Is a purely data-driven (surrogate-only) RBDO possible, as opposed to surrogate-assisted RBDO?
In UQLab's example:
design variables: d1, d2 (Gaussian distributions declared with only a standard deviation)
cost function: d1 + d2
three limit-state functions
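For reference, here is my understanding of how the cost and a limit state would be declared in UQLab syntax. This is only a sketch: the limit-state expression below is a placeholder I made up, not the actual g(d) from the example.

```matlab
uqlab;                                     % initialize UQLab

% Cost function d1 + d2 as a string-based model
CostOpts.mString = 'X(:, 1) + X(:, 2)';
myCost = uq_createModel(CostOpts);

% One of the three limit-state functions, declared the same way
% (placeholder expression -- the example's actual limit state differs)
LimitOpts.mString = 'X(:, 1).^2 .* X(:, 2) / 20 - 1';
myLimit = uq_createModel(LimitOpts);
```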
2-(1) When declaring the distributions of the design variables, is there a reason to specify only the standard deviation and not the mean?
2-(2) When using PCE, the code is as follows:
PCEOpts.Degree = 2:10;
PCEOpts.ExpDesign.NSamples = 10;
PCEOpts.ExpDesign.Sampling = 'LHS';
Is it correct to understand that this code draws 10 experimental design points from the Gaussian distributions initially declared for d1 and d2 using the LHS technique, and then builds a PCE model from them?
If so, is it possible to build a data-driven PCE model when a large data set is already available?
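In other words, something like the following, where the experimental design is supplied directly from existing data instead of being sampled. This is a sketch of what I have in mind; the data and the marginal moments are placeholder values I invented for illustration.

```matlab
uqlab;                                        % initialize UQLab

% Pre-existing data set (placeholder values for illustration)
X = lhsdesign(50, 2);                         % 50 observed inputs (d1, d2)
Y = sum(X, 2);                                % corresponding responses

% Input object matching the data (Gaussian marginals, placeholder moments)
for ii = 1:2
    InputOpts.Marginals(ii).Type = 'Gaussian';
    InputOpts.Marginals(ii).Moments = [0.5 0.1];   % [mean, std]
end
myInput = uq_createInput(InputOpts);

% PCE built from the supplied data instead of a sampled experimental design
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCE';
MetaOpts.Degree = 2:10;
MetaOpts.ExpDesign.X = X;                     % user-supplied design points
MetaOpts.ExpDesign.Y = Y;                     % user-supplied responses
myPCE = uq_createModel(MetaOpts);
```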
As stated above, it was declared that up to 50 samples would be added to derive the optimal solution. But if you look at the results in the figure below, the run ends after 9 samples. Does that mean PCE performed best among the three methods (SVR, PCE, and Kriging)?
- The figure above shows the results of the example.
The cost (Fstar) is almost identical across the three methods, but the optimal designs and the constraints at the solution differ. In my opinion, if all three methods had converged to the optimal solution, the cost, the optimal design, and the constraints at the solution should all be similar. How can I tell which method found the best solution?