Low and High Fidelity (LF/HF) Data

Hi Friends,

Thanks to the UQ team for the amazing tool UQLab. I have two fundamental questions, and I would appreciate any clarity/help from members of this forum, as well as any sources for further reading.

  1. What is the clear difference between low- and high-fidelity data in UQ? This comes up a lot when I read articles on multi-fidelity methods. In what ways can both kinds of data be sourced for a UQ analysis? In other words, how do I know whether a given set of data is LF or HF?

  2. Is there a way of dealing with epistemic uncertainty in UQLab in the context of mixed uncertainty?

I would appreciate insight on both questions. Thanks all for the support!

Dear @aokoro,

Thank you for your questions and your nice words.

Starting with your first question, data fidelity refers to the accuracy of the data, i.e. how well it represents reality. High-fidelity (HF) data accurately captures the phenomenon or system it is intended to represent, and it is typically more expensive to obtain (financially or computationally) than the corresponding low-fidelity (LF) data. There are many ways to obtain HF and LF data for a system. For example, according to [1] and [2], LF data can be acquired from simplified mathematical or numerical models, from a coarser discretisation, or from surrogate models derived from HF models. Another way to differentiate between HF and LF data is to consider whether the data were obtained from experiments or from simulations: usually, experimental data are of higher accuracy than data from computer simulations meant to describe the same system. For more information on this topic, you can read Section 4 of [1] and Section 1.3 of [2].
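As a toy illustration of the "coarser discretisation" route mentioned above (a generic Python sketch, not UQLab code): the same quantity, the integral of sin(x) on [0, π], is computed on a coarse grid (cheap, LF) and on a fine grid (costly, HF). The function name `model` and the grid sizes are arbitrary choices for this example.

```python
import numpy as np

def model(n_points):
    """Approximate the integral of sin(x) on [0, pi] with the trapezoidal
    rule on n_points nodes. A coarse grid is a cheap low-fidelity (LF)
    evaluation; a fine grid is a costlier high-fidelity (HF) one."""
    x = np.linspace(0.0, np.pi, n_points)
    y = np.sin(x)
    dx = x[1] - x[0]
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

lf = model(5)      # low fidelity: coarse discretisation, few model evaluations
hf = model(5001)   # high fidelity: fine discretisation, many evaluations
exact = 2.0        # analytical value of the integral
print(f"LF error: {abs(lf - exact):.2e}, HF error: {abs(hf - exact):.2e}")
```

Both grids evaluate the *same* underlying model; only the resolution (and hence cost and accuracy) differs, which is exactly the LF/HF trade-off that multi-fidelity methods exploit.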

Moving to your second question, it is not entirely clear to me what you mean by "dealing with epistemic uncertainty in the context of mixed uncertainty". If you seek to quantify the epistemic uncertainty in your surrogate model output due to the lack of data, this can be expressed by constructing confidence intervals for your model response. There are different ways to quantify the output uncertainty, depending, among other things, on your choice of surrogate model. For example, if you use polynomial chaos expansions (PCE), you can use bootstrap for local error estimation to construct confidence intervals. You can implement this easily in UQLab by following this example: https://www.uqlab.com/pce-bootstrap. You can find more details on Bootstrap PCE here (Section 1.7). If you use Kriging, constructing confidence intervals to express the epistemic uncertainty is straightforward; you can have a look at this example: https://www.uqlab.com/kriging-gaussian-process-regression.
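To show the bootstrap idea outside of UQLab (a minimal Python sketch, using an ordinary polynomial fit as a stand-in for a PCE surrogate; the test function and all sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive model" and a small experimental design
def f(x):
    return x * np.sin(x)

x_train = np.linspace(0.0, 10.0, 15)
y_train = f(x_train)
x_eval = np.linspace(0.0, 10.0, 200)

# Bootstrap: refit the surrogate on resampled designs, collect predictions
n_boot = 200
preds = np.empty((n_boot, x_eval.size))
for b in range(n_boot):
    idx = rng.integers(0, x_train.size, x_train.size)  # resample w/ replacement
    coeffs = np.polyfit(x_train[idx], y_train[idx], deg=4)
    preds[b] = np.polyval(coeffs, x_eval)

# Pointwise 95% confidence band from the bootstrap replicates;
# it widens where the surrogate is unstable, i.e. where data is lacking
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
print("max band width:", (hi - lo).max())
```

The spread of the refitted surrogates is a measure of the epistemic uncertainty due to the finite experimental design; the UQLab Bootstrap PCE example linked above follows the same principle with PCE coefficients instead of polynomial fits.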

On the other hand, if what you had in mind was imprecise probability as a way to represent epistemic uncertainty, this is not available in UQLab, at least for now.

I hope this helps!

Best regards,

[1] Fernández-Godino, M. G., Park, C., Kim, N. H., & Haftka, R. T. (2016). Review of multi-fidelity models. arXiv preprint arXiv:1609.07196.
[2] Peherstorfer, B., Willcox, K., & Gunzburger, M. (2018). Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3), 550-591.


Thank you so much for the detailed explanation and references - highly appreciated.