Comparison between Leave-One-Out Cross-Validation errors

Dear all,

I would like to ask your opinion about the leave-one-out (LOO) cross-validation error. Specifically, I have computed a non-intrusive Polynomial Chaos Expansion for two different outputs of interest, using the same collocation points for both. The first output has a mean value (computed from the Polynomial Chaos coefficients) of about 0.0233 and a LOO error of about 0.014; the second output has a mean value of 0.611 and a LOO error of 0.51.

My instinct says the first output should be better predicted than the second, since its LOO error is lower. However, if I have understood correctly, the LOO error is affected/weighted by the mean value of the response of interest, so I cannot say a priori which response is best predicted when the two responses take very different numerical values. I would like to know whether this reasoning is correct, and whether there is a general guideline for correctly interpreting the LOO error of each response. I hope I was clear; this concept is a bit complicated for me to explain.

Dear Simone,

The leave-one-out error (and its modified version) computed in UQLab is a relative value: it is normalized by the empirical variance of the data.

Thus you can compare it between different outputs. When you get 10^{-2} or less, you can usually safely use the obtained PCE. When it is 50%, you need to investigate further. I advise you to plot the values predicted by the PCE against the true values to get a hint of what could explain this lack of accuracy.
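To make the normalization concrete, here is a minimal sketch (not UQLab code; the data and polynomial basis are made up for illustration) of the relative LOO error for a least-squares surrogate, using the standard hat-matrix shortcut so that no refitting is needed:

```python
import numpy as np

# Toy data: a 1D model response with a little noise (purely illustrative)
rng = np.random.default_rng(0)
N = 50
x = rng.uniform(-1, 1, N)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(N)

# Design matrix of polynomial regressors (a stand-in for a PCE basis)
Psi = np.vander(x, 4, increasing=True)

# Least-squares coefficients and the diagonal of the hat matrix
# H = Psi (Psi^T Psi)^{-1} Psi^T
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)
h = np.einsum('ij,jk,ik->i', Psi, np.linalg.inv(Psi.T @ Psi), Psi)

# LOO residuals via the shortcut (y_i - yhat_i) / (1 - h_i),
# then normalization by the empirical variance of the data
loo_mse = np.mean(((y - Psi @ c) / (1.0 - h)) ** 2)
err_loo = loo_mse / np.var(y)

print(f"relative LOO error = {err_loo:.3e}")
```

Because of the division by `np.var(y)`, the result is dimensionless and insensitive to the scale of the output, which is what makes the comparison between outputs meaningful.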

Best regards

Bruno