It’s been a while since I’ve posted anything here, so here goes nothing…
We just released version 0.1.0 (yes, it’s still under development) of UQTestFuns, an open-source Python 3 library of test functions commonly used within the applied uncertainty quantification (UQ) community.
Here is the project page on GitHub and the documentation site on ReadTheDocs. If you happen to be a Python user developing a UQ analysis method and have some time, come visit us and let us know what you think. We’d appreciate it!
What are UQ test functions?
You’ve seen some of them before: Ishigami, Borehole, Sobol’-G, and many more. If you use UQLab, the examples typically make use of such test functions to demonstrate its capabilities. Here in UQWorld, there are a few more already posted. There was also an attempt to implement and collect test functions in Python announced here some time ago.
In particular, the package aims to provide:
- a lightweight implementation (with minimal dependencies: NumPy and SciPy) of many test functions available in the UQ literature
- a single entry point (combining models and their probabilistic input specification) to a wide range of test functions implementations in Python
- an opportunity for an open-source contribution where new test functions are added and reference results are posted.
We think the applied UQ community in Python is a healthy one with many capable frameworks already available: OpenTURNS, UQPy, (not to mention) UQ[py]Lab, etc. As new methods and frameworks are continuously being developed in the language, it may be worthwhile to have a library of UQ test functions written specifically in Python.
In this post, I’d like to introduce the package a bit as well as the motivation behind it.
Let’s assume you manage to install UQTestFuns (thanks!). To see which test functions are currently available:
>>> import uqtestfuns as uqtf
>>> uqtf.list_functions()
  No.  Constructor         Spatial Dimension  Application                 Description
-----  ------------------  -----------------  --------------------------  ----------------------------------------------------------------------------
    1  Ackley()            M                  optimization, metamodeling  Ackley function from Ackley (1987)
    2  Borehole()          8                  metamodeling, sensitivity   Borehole function from Harper and Gupta (1983)
    3  DampedOscillator()  8                  metamodeling, sensitivity   Damped oscillator model from Igusa and Der Kiureghian (1985)
    4  Flood()             8                  metamodeling, sensitivity   Flood model from Iooss and Lemaître (2015)
    5  Ishigami()          3                  sensitivity                 Ishigami function from Ishigami and Homma (1991)
    6  OTLCircuit()        6                  metamodeling, sensitivity   Output transformerless (OTL) circuit model from Ben-Ari and Steinberg (2007)
    7  OakleyOHagan1D()    1                  metamodeling                One-dimensional function from Oakley and O'Hagan (2002)
    8  Piston()            7                  metamodeling, sensitivity   Piston simulation model from Ben-Ari and Steinberg (2007)
    9  SobolG()            M                  sensitivity                 Sobol'-G function from Saltelli and Sobol' (1995)
   10  Sulfur()            9                  metamodeling, sensitivity   Sulfur model from Charlson et al. (1992)
   11  WingWeight()        10                 metamodeling, sensitivity   Wing weight model from Forrester et al. (2008)
I admit, 11 test functions do not make anything a library; this is just a start. Oh well…
Now let’s say we want to create an instance of the nine-dimensional sulfur model. The following snippet:
>>> my_testfun = uqtf.Sulfur()
creates the test function. The resulting instance is callable. For all intents and purposes, think of it as a function: it takes inputs and produces outputs:
>>> import numpy as np
>>> xx = np.array([[76.33260267, 0.99823932, 5.31944318, 6.53120488,
...                 0.34230479, 1.43336075, 0.49557752, 0.42685041,
...                 0.79445003]])
>>> my_testfun(xx)
array([-2.39393826])
What (I think) is unique about UQ test functions is the probabilistic specification of the inputs; the results of a UQ analysis depend on this specification, so it is (or at least should be) an integral part of the test function. In UQTestFuns, the probabilistic input model is attached to the test function (alas, with no support for dependent inputs yet):
>>> print(my_testfun.prob_input)
Name         : Sulfur-Penner-1994
Spatial Dim. : 9
Description  : Probabilistic input model for the Sulfur model from Penner et al. (1994).
Marginals    :

  No.  Name      Distribution  Parameters                Description
-----  --------  ------------  ------------------------  ----------------------------------------------------------------------------------
    1  Q         lognormal     [4.26267988 0.13976194]   Source strength of anthropogenic Sulfur [10^12 g/year]
    2  Y         lognormal     [-0.69314718 0.40546511]  Fraction of SO2 oxidized to SO4(2-) aerosol [-]
    3  L         lognormal     [1.70474809 0.40546511]   Average lifetime of atmospheric SO4(2-) [days]
    4  Psi_e     lognormal     [1.60943791 0.33647224]   Aerosol mass scattering efficiency [m^2/g]
    5  beta      lognormal     [-1.2039728 0.26236426]   Fraction of light scattered upward hemisphere [-]
    6  f_Psi_e   lognormal     [0.53062825 0.18232156]   Fractional increase in aerosol scattering efficiency due to hygroscopic growth [-]
    7  T^2       lognormal     [-0.54472718 0.33647224]  Square of atmospheric transmittance above aerosol layer [-]
    8  (1-Ac)    lognormal     [-0.94160854 0.09531018]  Fraction of earth not covered by cloud [-]
    9  (1-Rs)^2  lognormal     [-0.32850407 0.18232156]  Square of surface coalbedo [-]
A full UQ framework (like UQLab, UQPy, etc.) typically has extensive probabilistic input modeling capabilities, so this particular feature of the package may not be necessary there. However, some smaller-in-scope UQ tools (say, a metamodeling tool) often only provide the sample points—at which a model is to be evaluated—in a hypercube, most probably the unit hypercube [0, 1]^M. Full probabilistic input modeling capabilities may be outside the scope of such tools. During their development and testing phases, it would be convenient to carry out the transformation of the input points, at least to run the test functions.
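For tools that only produce points in [0, 1]^M, this mapping to the physical domain can be done with an inverse-CDF (probability integral) transform. Below is a minimal sketch using SciPy, taking the first two lognormal marginals printed above and assuming their printed parameters are the mean and standard deviation of the underlying normal distribution; this is a generic illustration, not the UQTestFuns API:

```python
import numpy as np
from scipy.stats import lognorm

# Points in the unit hypercube, e.g., from a low-discrepancy sequence
rng = np.random.default_rng(42)
uu = rng.random((5, 2))  # 5 points, first 2 (of M) dimensions only

# Assumed (mu, sigma) of the underlying normal for the first two
# marginals (Q and Y), read off the printed input specification above
marginals = [(4.26267988, 0.13976194), (-0.69314718, 0.40546511)]

# Inverse-CDF transform, column by column
xx = np.empty_like(uu)
for j, (mu, sigma) in enumerate(marginals):
    # SciPy parametrizes the lognormal with s=sigma and scale=exp(mu)
    xx[:, j] = lognorm.ppf(uu[:, j], s=sigma, scale=np.exp(mu))
```

The same pattern works for any set of independent marginals with a known inverse CDF.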
That’s UQTestFuns at a glance, and honestly, there’s really not much else to it. Now on to the why…
Why use test functions
New methods for each of the UQ analyses (metamodeling, sensitivity analysis, etc.) are continuously being developed. Although such methods are ultimately aimed at solving real-world problems (typically involving a complex, expensive-to-evaluate computational model), during the development phase developers may prefer to use test functions for validation and benchmarking purposes. This is because:
- test functions are fast to evaluate, at least, faster than the real ones;
- there are many of them available in the literature for various types of analyses;
- while a UQ analysis usually treats the computational model of interest as a black box, test functions are known explicitly (and for some, the analytical results are also available), so developers can thoroughly diagnose their methods by exploiting particular structures of the functions.
NOTE: Assuming that real models are expensive to evaluate, the cost of an analysis is typically measured by the number of function evaluations needed to achieve the desired accuracy.
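To make the “known results” point concrete, here is a plain-NumPy rendition of the Ishigami function (written from its standard definition, not taken from UQTestFuns) checked against its known analytical variance:

```python
import numpy as np

def ishigami(xx, a=7.0, b=0.1):
    """Ishigami function; inputs are uniform on [-pi, pi]^3."""
    return (
        np.sin(xx[:, 0])
        + a * np.sin(xx[:, 1]) ** 2
        + b * xx[:, 2] ** 4 * np.sin(xx[:, 0])
    )

# Analytical variance, V = a^2/8 + b*pi^4/5 + b^2*pi^8/18 + 1/2
# (from Ishigami and Homma, 1991)
a, b = 7.0, 0.1
var_exact = a**2 / 8 + b * np.pi**4 / 5 + b**2 * np.pi**8 / 18 + 0.5

# Crude Monte Carlo estimate for comparison
rng = np.random.default_rng(0)
xx = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
var_mc = ishigami(xx).var()
```

With such closed-form references, a developer can immediately check whether, say, a variance or Sobol’ index estimator converges to the right value.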
Where to get test functions
Several online resources provide a wide range of test functions relevant to the applied UQ community.
For example (not an exhaustive list):
- The Virtual Library of Simulation Experiments: Test Functions and Datasets is, we believe, the definitive repository for UQ test functions. The site provides about a hundred test functions for a wide range of applications; each test function is described on a dedicated page complemented with implementations in MATLAB and R.
- The Benchmark proposals of GdR provide a series of documents that contain test function specifications, but no code implementation whatsoever.
- The Benchmark page of UQWorld provides a selection of test functions in metamodeling, sensitivity analysis, and reliability analysis along with their implementation in MATLAB.
- RPrepo: a reliability problems repository contains numerous reliability analysis test functions implemented in Python. It was used as the basis for the Black-box reliability challenge 2019 (here is a post about that); if I recall correctly, the Chair of RSUQ was a successful participant of the challenge.
Common to all these online resources is the requirement to either:
- implement the test function oneself following the specification, or
- when available, download each of the test functions separately (except, perhaps, RPrepo).
Neither option is time-efficient or convenient.
Alternatively, in a given programming language, some packages are shipped with a selection of test functions for illustration, validation, or benchmarking. Considering the Python applied UQ community, here are some examples (the data below is as of 2023-02-28; once again, the list is not exhaustive):
- UncertainPy includes 8 test functions (mainly in the context of neuroscience) for illustrating the package capabilities.
- PyApprox includes 18 test functions, including some non-algebraic functions for benchmarking purposes.
- Surrogate Modeling Toolbox (SMT) includes 11 analytical and engineering problems for benchmarking purposes.
- OpenTURNS has its own separate benchmark package called OTBenchmark, which includes a whopping 37 test functions.
Taken together, even accounting for some overlap, these open-source packages already provide quite a lot of test functions implemented in Python. The problem is that the functions are part of their respective packages: to access the test functions belonging to a package, the whole analysis package must be installed first.
Moreover, test functions from a given package are often implemented in a way that is tightly coupled with the package itself. To use the test functions belonging to a package, you may need to learn some basic usage and (very specific) terminology of the package.
If you are developing a new UQ method and would like to run it against some test functions, going through all of these packages just to get implementations of (say, analytical) functions sounds like a hassle. You might end up implementing some selection of the test functions yourself and eventually ship them together with your package, thus perpetuating the problem we have at hand.
Why another package
While there seems to be a problem with conveniently obtaining UQ test functions implemented in Python, a healthy dose of skepticism is, well, healthy; so it’s natural to ask: do we need another package for that?
As exemplified in the list above, some test functions are already implemented in Python and delivered as part of a UQ analysis package; one is even a dedicated benchmark package. So there have been attempts.
And yet, we think none of them is a satisfactory way to obtain test functions. Specifically, none of them simultaneously provides:
- a lightweight implementation (with minimal dependencies) of many test functions available in the UQ literature; this means the package is free of any implementation of UQ analysis methods, resulting in minimal overhead when setting up the test functions,
- a single entry point (combining models and input specification) to a wide range of test functions,
- an opportunity for an open-source contribution where new test functions are proposed/added and new reference results are proposed/posted.
Satisfying all the above requirements is exactly the goal of the UQTestFuns package.
An homage to VLSE
UQTestFuns is an homage to the Virtual Library of Simulation Experiments: Test Functions and Datasets (VLSE).
For many years this site has been very useful in providing the uncertainty quantification (UQ) community with test functions (and datasets) from the literature. It is very well organized, describing each of the test functions briefly but clearly, and making proper citations to the sources of the test functions. It even includes downloadable implementations both in MATLAB and R.
But it has been a long time since the site was updated, and it is not very clear how other people can contribute to the site, say, to add a new test function or an implementation in languages other than MATLAB and R.
For instance, we are currently developing a software package written in Python that is hopefully relevant to the UQ community. For testing, we find ourselves reimplementing the test functions and the corresponding input specification in Python.
Instead of doing the same thing, we opted to distribute the collection of the test functions we have implemented so far as a Python package with a consistent interface. The package can then be installed and imported as needed by other packages (including our own) for testing and benchmarking purposes.
The package documentation, available online, serves as a library of UQ test functions similar to that of VLSE (but with implementations in Python). While it may not always be possible, we also try to include the relevant reference values for common UQ analyses (e.g., metamodeling, sensitivity analysis, moments estimation) associated with the test functions available in the literature.
As an open-source project, UQTestFuns tries to provide clear guidance for anyone on how to contribute: fixing broken things and adding new stuff, be it a new test function, new reference values, or a better description.
In the near future, we will keep focusing on adding test functions (both analytical and numerical/non-analytical ones) and metafunctions to UQTestFuns, along with available published results; any contributions are welcome! It is, after all, supposed to be a library of UQ test functions.
One of the project’s long-term goals, however, is to make UQTestFuns a benchmark suite, or at least a part of one. This, we believe, is a much more ambitious goal and thus entails more work. The package itself would then have to include reference results (analytical or otherwise) of a particular UQ analysis associated with each test function, not just have them available statically in the docs. The number of function evaluations must also be tracked. This way, when a particular method is run against the package, the results of an analysis can be immediately verified and compared with published results. Organizing such data (storage and runtime access) might require a database implementation.
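As a hypothetical illustration of what evaluation tracking could look like (none of this exists in UQTestFuns today), a thin wrapper around any callable test function could count the calls made during an analysis:

```python
from functools import wraps

def track_evals(func):
    """Wrap a test function so each call increments a counter.

    A hypothetical sketch of evaluation tracking; not part of any
    existing package's API.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.num_evals += 1
        return func(*args, **kwargs)
    wrapper.num_evals = 0
    return wrapper

# Usage with any callable test function (a toy one here):
@track_evals
def my_testfun(x):
    return x ** 2

my_testfun(2.0)
my_testfun(3.0)
print(my_testfun.num_evals)  # prints 2
```

A benchmark runner could then report the counter alongside the accuracy achieved, giving the cost-vs-accuracy trade-off mentioned above.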
Furthermore, compiling results from methods proposed in the literature in such a way that they can be compared with each other is a non-trivial task. Simulation experiments in the literature may lack some details; the performance metrics used may not be common; and, especially in the older literature, reproducible analysis codes may not be included.
These are all daunting tasks, so we think it’s a good idea to set the bar low enough and focus on adding additional test functions for the time being; they are the building blocks for a more advanced package.
That’s all! I think I might have been rambling about test functions. Thanks a lot for reading!
Simen Tennøe, Geir Halnes, and Gaute T. Einevoll. UncertainPy: a Python toolbox for uncertainty quantification and sensitivity analysis in computational neuroscience. Frontiers in Neuroinformatics, 2018. doi:10.3389/fninf.2018.00049
J. D. Jakeman. PyApprox: enabling efficient model analysis. Technical Report SAND2022-10458, Sandia National Laboratories, Albuquerque, New Mexico, 2021.
Mohamed Amine Bouhlel, John T. Hwang, Nathalie Bartoli, Rémi Lafage, Joseph Morlier, and Joaquim R. R. A. Martins. A Python surrogate modeling framework with derivatives. Advances in Engineering Software, 135:102662, 2019. doi:10.1016/j.advengsoft.2019.03.005
Michaël Baudin, Anne Dutfoy, Bertrand Iooss, and Anne-Laure Popelin. OpenTURNS: an industrial software for uncertainty quantification in simulation. In Handbook of Uncertainty Quantification, pages 2001–2038. Springer International Publishing, 2017. doi:10.1007/978-3-319-12385-1_64
Elias Fekhari, Michaël Baudin, Vincent Chabridon, and Youssef Jebroun. OTBenchmark: an open source Python package for benchmarking and validating uncertainty quantification algorithms. In The 4th ECCOMAS Thematic Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2021). Eccomas Proceedia, 2021. doi:10.7712/120221.8034.19093