Dear @aokoro,
Thank you for your questions and your nice words.
Starting with your first question: data fidelity refers to the accuracy of the data, i.e. how well it represents reality. High-fidelity (HF) data accurately captures the phenomenon or system it is intended to represent, and it is typically more expensive to obtain (financially or computationally) than the corresponding low-fidelity (LF) data. There are many ways to obtain HF and LF data for a system. For example, according to [1] and [2], LF data can be acquired from simplified mathematical or numerical models, from a coarser discretisation, or from surrogate models derived from HF models. Another way to differentiate between HF and LF data is whether the data were obtained from experiments or from simulations: experimental data are usually more accurate than data from computer simulations meant to describe the same system. For more information on this topic, you can read Section 4 of [1] and Section 1.3 of [2].
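To make the HF/LF distinction concrete, here is a minimal sketch (plain Python, not UQLab code; the ODE, step counts, and function name are my own choices for illustration) in which the same forward-Euler solver produces HF data on a fine grid and LF data on a coarse grid:

```python
import math

def euler_solve(n_steps):
    """Forward Euler for dy/dt = -y, y(0) = 1, integrated on [0, 1]."""
    h = 1.0 / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += h * (-y)           # y_{k+1} = y_k + h * f(y_k)
    return y

exact = math.exp(-1.0)          # analytical solution: y(1) = e^{-1}
y_hf = euler_solve(1000)        # fine discretisation  -> high-fidelity data
y_lf = euler_solve(10)          # coarse discretisation -> low-fidelity data

err_hf = abs(y_hf - exact)
err_lf = abs(y_lf - exact)
```

The coarse-grid run is about 100x cheaper, but its error is roughly two orders of magnitude larger; this cost/accuracy trade-off is exactly what multi-fidelity methods exploit.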
Moving to your second question, it is not entirely clear to me what you mean by “dealing with epistemic uncertainty from the context of mixed uncertainty”. If you seek to quantify the epistemic uncertainty in your surrogate model output due to the lack of data, this can be expressed by constructing confidence intervals for your model response. There are different ways to quantify the output uncertainty, depending, among other things, on your choice of surrogate model. For example, if you use polynomial chaos expansions (PCE), you can use the bootstrap for local error estimation to construct confidence intervals. You can implement this easily in UQLab by following this example: https://www.uqlab.com/pce-bootstrap. You can find more details on bootstrap PCE here (Section 1.7). If you use Kriging, constructing confidence intervals to express the epistemic uncertainty is straightforward; you can have a look at this example: https://www.uqlab.com/kriging-gaussian-process-regression.
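As a language-agnostic illustration of the bootstrap idea (this is plain NumPy, not the UQLab API; the toy data set and the least-squares polynomial standing in for a PCE are my own assumptions), one refits the surrogate on resampled data and takes pointwise percentiles of the replicated predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model evaluations": noisy observations of a smooth response.
x = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)

def fit_surrogate(x_data, y_data, degree=5):
    """Least-squares polynomial surrogate (a stand-in for a PCE)."""
    return np.polynomial.polynomial.Polynomial.fit(x_data, y_data, degree)

x_new = np.linspace(-1.0, 1.0, 200)   # points where we want predictions
n_boot = 500
preds = np.empty((n_boot, x_new.size))
for b in range(n_boot):
    # Resample the data with replacement and refit the surrogate.
    idx = rng.integers(0, x.size, x.size)
    preds[b] = fit_surrogate(x[idx], y[idx])(x_new)

# Pointwise 95% bootstrap confidence band for the surrogate response.
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
```

The spread of the bootstrap replicates reflects how strongly the fit depends on the particular data set, i.e. the epistemic part of the uncertainty; in Kriging the analogous interval comes directly from the predictive variance of the Gaussian process.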
On the other hand, if what you had in mind was imprecise probability as a way to represent epistemic uncertainty, this is not available in UQLab, at least for now.
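For completeness, the imprecise-probability representation can still be sketched by hand outside UQLab: an epistemic interval on a distribution parameter is propagated to bounding CDFs, whose envelope is a p-box. A minimal sketch, assuming (my choice, for illustration) a unit-variance normal whose mean is only known to lie in [-0.5, 0.5]:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma=1.0):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Epistemic interval on the mean: mu in [-0.5, 0.5].
xs = [i / 10.0 for i in range(-40, 41)]
# Envelope of the CDFs over the parameter interval = p-box bounds.
lower = [min(normal_cdf(x, -0.5), normal_cdf(x, 0.5)) for x in xs]
upper = [max(normal_cdf(x, -0.5), normal_cdf(x, 0.5)) for x in xs]
```

Any probability statement then becomes an interval (lower bound, upper bound) rather than a single number, which is the defining feature of this way of representing epistemic uncertainty.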
I hope this helps!
Best regards,
Katerina
[1] Fernández-Godino, M. G., Park, C., Kim, N. H., & Haftka, R. T. (2016). Review of multi-fidelity models. arXiv preprint arXiv:1609.07196.
[2] Peherstorfer, B., Willcox, K., & Gunzburger, M. (2018). Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3), 550-591.