I am trying to calculate bootstrap PCE (bPCE) bounds on the predictions of a trained SSE metamodel. The SSER documentation suggests this should be possible, but I don’t see a corresponding syntax or usage option in the SSE documentation.
I would like to do something similar to the last PCE example, which generates the predictions, variance, and bootstrap values, but for all the partitions of an SSE model:
[YPCval,YPC_var,YPCval_Bootstrap] = uq_evalModel(myPCE,Xval);
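For context, this is the plain-PCE workflow I am comparing against (a minimal sketch based on the UQLab PCE manual; the variable names and experimental-design setup are placeholders from my own script):

```matlab
% Plain-PCE bootstrap case for comparison; myInput / Xval are
% placeholders for my input object and validation points.
MetaOpts.Type = 'Metamodel';
MetaOpts.MetaType = 'PCE';
MetaOpts.Bootstrap.Replications = 100;  % enables the bPCE outputs
myPCE = uq_createModel(MetaOpts);

% Three-output call returns predictions, variance, and bootstrap replications
[YPCval, YPC_var, YPCval_Bootstrap] = uq_evalModel(myPCE, Xval);
```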
Looking through some of the code where this seems to be implemented for the SSER Pf confidence bounds, I have tried a few ways to modify the first SSE tutorial to output the bPCE information, but have not been successful.
Attempts in the SSE case:
Adding the option specifying the number of bootstrap replications to the SSE metaOpts:
metaOpts.ExpOptions.Bootstrap.Replications = 100;
Then training the model and using the tutorial’s evaluation call:
y_model_test = uq_evalModel(x_test);
which returns only a vector of doubles (no variance or bootstrap outputs).
Or trying to match the bPCE calling signature with `[y_model_test, temp1, temp2] = uq_evalModel(x_test);`, which gives an error that the number of output arguments is too high (with the SSE metamodel selected).
Or trying to use the function on the SSE object that appears to be used for the SSER Pf bounds:
[y_ex, yvar_ex, yrep_ex] = mySSE_sequential.SSE.evalSSE(x_test)
which results in an error in uq_PCE_eval: “unrecognized field name ‘PCE’”.
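For completeness, here is a minimal sketch of the full sequence I am attempting, adapted from the first SSE tutorial (the input object, experimental design, and x_test are placeholders from my setup; the three attempts above are shown inline):

```matlab
% Minimal sketch of my SSE setup; input/experimental-design options
% are omitted and follow the first SSE tutorial.
uqlab;

metaOpts.Type = 'Metamodel';
metaOpts.MetaType = 'SSE';
% Attempt 1: pass bootstrap replications down to the residual expansions
metaOpts.ExpOptions.Bootstrap.Replications = 100;
mySSE = uq_createModel(metaOpts);

% Attempt 1 (cont.): runs, but returns only point predictions
y_model_test = uq_evalModel(mySSE, x_test);

% Attempt 2: errors with "too many output arguments"
% [y_model_test, temp1, temp2] = uq_evalModel(mySSE, x_test);

% Attempt 3: errors in uq_PCE_eval ("unrecognized field name 'PCE'")
% [y_ex, yvar_ex, yrep_ex] = mySSE.SSE.evalSSE(x_test);
```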
Is there a specific way this functionality is supposed to work for getting bPCE bounds from an SSE metamodel?
I am using: