Post-prediction Evaluation Metrics
Relative Mean-Squared Error (RelMSE)
The relative mean-squared error is calculated as:

\text{RelMSE} = \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}

where:
- y_i is the actual value for the i-th observation,
- \hat{y}_i is the predicted value for the i-th observation,
- \bar{y} is the mean of the actual values, calculated as \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i, and
- N is the number of data points.
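As a quick illustration, the RelMSE can be computed in a few lines of NumPy. The arrays y and y_hat below are hypothetical toy values, and the sketch assumes the common convention of normalizing the sum of squared prediction errors by the sum of squared deviations of the actual values from their mean:

```python
import numpy as np

# Hypothetical toy data: actual values and model predictions.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# RelMSE: squared prediction error normalized by the spread of the
# actual values around their mean (dimensionless).
rel_mse = np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(rel_mse)  # small value -> predictions are close relative to the data's spread
```

Because of the normalization, RelMSE is scale-invariant: multiplying both y and y_hat by a constant leaves it unchanged.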
Relative Root-Mean-Squared Error (RelRMSE)
The relative root-mean-squared error is the square root of the RelMSE:

\text{RelRMSE} = \sqrt{\frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}}
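A minimal NumPy sketch, using hypothetical toy arrays y and y_hat, assuming the RelRMSE is simply the square root of the RelMSE defined in the previous section:

```python
import numpy as np

# Hypothetical toy data: actual values and model predictions.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# RelRMSE: square root of the relative mean-squared error.
rel_rmse = np.sqrt(np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2))
print(rel_rmse)
```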
Relative Mean Absolute Error (RelMAE)
The relative mean absolute error is calculated as:

\text{RelMAE} = \frac{\frac{1}{N} \sum_{i=1}^{N} |y_i - \hat{y}_i|}{\sqrt{\text{Var}[y]}}

where \text{Var}[y] is the variance of the actual values, calculated as \text{Var}[y] = \frac{1}{N} \sum_{i=1}^{N} (y_i - \bar{y})^2.
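A NumPy sketch with hypothetical toy arrays y and y_hat. Note the normalization here by \sqrt{\text{Var}[y]} (the standard deviation of the actual values) is one common convention that keeps the metric dimensionless; treat it as an assumption rather than a universal definition:

```python
import numpy as np

# Hypothetical toy data: actual values and model predictions.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# RelMAE: mean absolute error normalized by the standard deviation
# of the actual values (np.std uses the 1/N population convention).
rel_mae = np.mean(np.abs(y - y_hat)) / np.std(y)
print(rel_mae)
```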
Mean Absolute Percentage Error (MAPE)
The mean absolute percentage error is calculated as:

\text{MAPE} = \frac{100\%}{N} \sum_{i=1}^{N} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
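The MAPE follows the standard definition above. A NumPy sketch with hypothetical toy arrays y and y_hat (note that MAPE is undefined whenever some y_i = 0, so the toy data avoids zeros):

```python
import numpy as np

# Hypothetical toy data: actual values (all nonzero) and model predictions.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# MAPE: mean of the absolute relative errors, expressed in percent.
mape = 100.0 * np.mean(np.abs((y - y_hat) / y))
print(mape)
```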
Coefficient of Determination (Q2)
The coefficient of determination is calculated as:

Q^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}
A value of 1 indicates perfect prediction, while a value of 0 indicates no predictive power.
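A NumPy sketch with hypothetical toy arrays y and y_hat, assuming the usual one-minus-residual-sum-of-squares form, so that Q^2 complements the relative squared error:

```python
import numpy as np

# Hypothetical toy data: actual values and model predictions.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Q2: 1 minus the ratio of residual sum of squares to total sum of squares.
q2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(q2)  # close to 1 -> good predictive accuracy on this toy data
```

A model that always predicts the mean \bar{y} gets Q^2 = 0; a model worse than that yields a negative value.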
Pre-prediction Evaluation Metrics
Relative Cross-Validation Error (RelCVErr)
The relative cross-validation error using leave-one-out (LOO) cross-validation is calculated as:

\text{RelCVErr} = \frac{\sum_{i=1}^{N} \left(y_i - \hat{y}_{i(-i)}\right)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}
where \hat{y}_{i(-i)} is the predicted value for the i-th observation obtained by leaving out the i-th data point in the training process.
When a Polynomial Chaos Expansion (PCE) is fitted by least-squares minimization, the LOO error can be computed directly from that single fit, without building N separate metamodels. For more information, see the UQLab user manual – Polynomial chaos expansions, Section 1.4.2.
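The shortcut rests on a classical least-squares identity: the LOO residual at point i equals the full-fit residual divided by (1 - h_i), where h_i is the i-th leverage, the i-th diagonal entry of A(A^T A)^{-1}A^T for regression matrix A. The sketch below is a generic least-squares demonstration (an ordinary polynomial fit standing in for a PCE, not UQLab code): it computes RelCVErr twice, by brute-force refitting and by the leverage formula, and the two agree:

```python
import numpy as np

# Hypothetical setup: a degree-2 polynomial least-squares fit on toy data,
# standing in for a PCE metamodel fitted by least squares.
x = np.linspace(-1.0, 1.0, 8)
y = np.sin(np.pi * x)                         # "actual" values
A = np.vander(x, 3, increasing=True)          # regression matrix [1, x, x^2]

# Brute-force LOO: refit N times, each time leaving out one point.
loo_sq = np.empty(x.size)
for i in range(x.size):
    mask = np.arange(x.size) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    loo_sq[i] = (y[i] - A[i] @ c) ** 2
rel_cv_err = np.mean(loo_sq) / np.var(y)

# Shortcut: one fit plus the leverages h_i = diag(A (A^T A)^{-1} A^T).
c, *_ = np.linalg.lstsq(A, y, rcond=None)
h = np.sum(A * np.linalg.solve(A.T @ A, A.T).T, axis=1)
rel_cv_err_fast = np.mean(((y - A @ c) / (1.0 - h)) ** 2) / np.var(y)

print(rel_cv_err, rel_cv_err_fast)  # identical up to round-off
```

The shortcut replaces N refits with a single fit plus one diagonal extraction, which is what makes the LOO error cheap to evaluate for least-squares PCE.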