Post-prediction Evaluation Metrics
Relative Mean-Squared Error (RelMSE)
The relative mean-squared error is calculated as:

\text{RelMSE} = \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}
where:

y_i is the actual value for the i-th observation,

\hat{y}_i is the predicted value for the i-th observation,

\bar{y} is the mean of the actual values calculated as \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i, and

N is the number of data points.
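Using the definitions above, the RelMSE can be sketched in a few lines of NumPy. This is a minimal illustration (the helper name rel_mse is ours), assuming the squared prediction errors are normalized by the total variation of the observations around their mean:

```python
import numpy as np

def rel_mse(y, y_hat):
    # Sum of squared prediction errors, normalized by the total
    # variation of the observations around their mean \bar{y}.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

A perfect prediction gives 0, while predicting the mean \bar{y} for every point gives 1.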
Relative Root-Mean-Squared Error (RelRMSE)
The relative root-mean-squared error is the square root of the RelMSE:

\text{RelRMSE} = \sqrt{\frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}}
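A corresponding NumPy sketch (the helper name rel_rmse is ours), again assuming normalization by the total variation around the mean:

```python
import numpy as np

def rel_rmse(y, y_hat):
    # Square root of the relative mean-squared error.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.sqrt(np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))
```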
Relative Mean Absolute Error (RelMAE)
The relative mean absolute error is calculated as:

\text{RelMAE} = \frac{\frac{1}{N} \sum_{i=1}^{N} |y_i - \hat{y}_i|}{\sqrt{\text{Var}[y]}}

where \text{Var}[y] is the variance of the actual values, calculated as \text{Var}[y] = \frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})^2.
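A NumPy sketch (the helper name rel_mae is ours), assuming the mean absolute error is normalized by the standard deviation \sqrt{\text{Var}[y]} so the result is dimensionless:

```python
import numpy as np

def rel_mae(y, y_hat):
    # Mean absolute error normalized by the standard deviation of y.
    # np.std uses the 1/N convention, matching the Var[y] definition above.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.mean(np.abs(y - y_hat)) / np.std(y)
```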
Mean Absolute Percentage Error (MAPE)
The mean absolute percentage error is calculated as:

\text{MAPE} = \frac{100\%}{N} \sum_{i=1}^{N} \left| \frac{y_i - \hat{y}_i}{y_i} \right|

Note that the MAPE is undefined whenever any actual value y_i is zero.
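A NumPy sketch of the MAPE (the helper name mape is ours):

```python
import numpy as np

def mape(y, y_hat):
    # Mean of the absolute relative errors, expressed in percent.
    # Undefined when any actual value y_i is zero.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 100.0 * np.mean(np.abs((y - y_hat) / y))
```

For example, over-predicting every observation by 10% yields a MAPE of 10.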
Coefficient of Determination (Q2)
The coefficient of determination is calculated as:

Q^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}

A value of 1 indicates perfect prediction, while a value of 0 indicates no predictive power, i.e. the model performs no better than predicting the mean \bar{y} for every point.
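A NumPy sketch of Q^2 as one minus the ratio of residual to total sum of squares (the helper name q2 is ours):

```python
import numpy as np

def q2(y, y_hat):
    # 1 minus the ratio of the residual sum of squares to the
    # total sum of squares around the mean of the actual values.
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```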
Pre-prediction Evaluation Metrics
Relative Cross-Validation Error (RelCVErr)
The relative cross-validation error using leave-one-out (LOO) cross-validation is calculated as:

\text{RelCVErr} = \frac{\frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_{i(i)} \right)^2}{\text{Var}[y]}

where \hat{y}_{i(i)} is the predicted value for the i-th observation obtained by leaving out the i-th data point in the training process.
When using Polynomial Chaos Expansion (PCE) with coefficients obtained from least-squares minimization, you can compute the leave-one-out (LOO) error analytically, without building N separate metamodels. For more information, see the UQLab user manual – Polynomial chaos expansions, Section 1.4.2.
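For an ordinary linear least-squares fit y ≈ A c, this shortcut follows from the hat matrix: the LOO residual for point i is the ordinary residual divided by (1 - h_ii), where h_ii is the i-th diagonal entry of H = A (A^T A)^{-1} A^T. The sketch below is a generic illustration of that identity, not UQLab's implementation; the function name loo_rel_cv_err is ours:

```python
import numpy as np

def loo_rel_cv_err(A, y):
    # Analytic leave-one-out error for a least-squares fit y ~ A @ c.
    # LOO residual: e_i = (y_i - yhat_i) / (1 - h_ii), with h_ii the
    # diagonal of the hat matrix H = A (A^T A)^{-1} A^T, so no model
    # has to be refitted for each left-out point.
    A, y = np.asarray(A, dtype=float), np.asarray(y, dtype=float)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    h = np.einsum('ij,ij->i', A @ np.linalg.inv(A.T @ A), A)  # diag of H
    e_loo = (y - A @ c) / (1.0 - h)
    return np.mean(e_loo ** 2) / np.var(y)
```

This returns exactly the same value as refitting the model N times with one point left out each time, at the cost of a single fit.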