A few weeks ago (24-26 June 2019), UNCECOMP 2019 took place on the island of Crete, Greece. With over 300 talks, this might be one of the largest conferences on uncertainty quantification (UQ). Seven members of our Chair (that’s almost everybody!) joined the conference and presented their work.
- @bsudret gave a semi-plenary talk entitled Surrogate modeling meets machine learning.
- @ste presented the work of @c.lataniotis on high-dimensional data-driven uncertainty propagation and of F. Schmid on a new reliability-based sensitivity measure.
- @nluethen reported on her extensive literature survey and benchmark results on sparse polynomial chaos expansions.
- @paulremo talked about stochastic spectral embedding (SSE) for likelihood approximation.
- @xujia presented his approach to surrogate the response PDF of stochastic simulators.
- @torree covered copulas and dependence modeling for UQ.
- @moustapha spoke about his two-step approach to surrogate models with discontinuous outputs.
Here are some highlights and noteworthy talks from the conference according to three of our Ph.D. students! They apologize in advance if they have somehow misconstrued any of these talks.
On PCE and experimental designs
Our Ph.D. student, @nluethen, has been working on PCE and all the relevant ingredients to construct a sparse PCE metamodel. She followed some interesting talks on the subject, particularly the ones about sparsity and experimental designs. Below are her summaries of these talks.
Roger Ghanem and Iason Papaioannou talked about two different methodologies with a similar underlying idea: A new representation of the input random vector such that the resulting surrogate model depends on only a few of the new coordinates.
- Ghanem’s approach is to identify a rotation of the input random vector such that, in the new coordinates, the non-zero coefficients are aligned with a small number of coordinates (Ghanem did emphasize, however, that this is not the same as sparsity). He referred to this rotation as a telescope into the data and presented three different methods to compute it. Some of the relevant papers related to this plenary talk are listed in the references below.
- Papaioannou’s approach, on the other hand, uses partial least squares (PLS) to identify latent structures that have high covariance with the model response. He presented a nonlinear extension of PLS for PCE and applied the method successfully to a 300-dimensional problem.
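The core PLS idea can be sketched in a few lines: project the centered inputs onto the weight direction with maximal covariance with the response, and use the resulting score as a new latent coordinate. The toy model below is hypothetical, and a single linear PLS step stands in for the adaptive, nonlinear PLS-PCE method presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 50 input dimensions, but the response is driven by one hidden direction
d, n = 50, 2000
X = rng.normal(size=(n, d))
s = X @ (np.ones(d) / np.sqrt(d))   # hidden driving direction (unknown in practice)
y = s**3                            # nonlinear response

# One PLS step: the weight vector w maximizes the covariance between X @ w and y
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                          # latent score: the new 1D coordinate

# t recovers the hidden direction almost perfectly, so a 1D PCE in t
# could replace a 50-dimensional one
print(abs(np.corrcoef(t, s)[0, 1]))
```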
Zicheng Liu’s work on resampled PCE (presented by his colleague, Soumaya Azzi) was also interesting. In this approach, the performance of a sparse PCE built from a rather small experimental design is improved by recomputing the PCE many times on different subsets of the data. The terms that were selected most often (i.e., that received a nonzero coefficient) are considered the most robust and are retained in the final PCE computation.
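The resampling idea can be illustrated with a small sketch. The 1D toy model here is hypothetical, and ordinary least squares with a magnitude threshold stands in for the sparse solver of the actual method:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(0)

# Toy model involving only Legendre polynomials of degree 0, 1, and 2
def model(x):
    return 1.0 + 0.5 * x + 2.0 * (1.5 * x**2 - 0.5)

degree, n = 7, 40
x = rng.uniform(-1.0, 1.0, n)                   # small experimental design
y = model(x)
Psi = np.stack([Legendre.basis(p)(x) for p in range(degree + 1)], axis=1)

# Resampling: refit on random subsets and count how often each term survives
n_rep, counts = 200, np.zeros(degree + 1)
for _ in range(n_rep):
    idx = rng.choice(n, size=30, replace=False)
    c, *_ = np.linalg.lstsq(Psi[idx], y[idx], rcond=None)
    counts += np.abs(c) > 1e-6                  # term "selected" in this run

# Keep only the most frequently selected (most robust) terms for the final fit
robust = counts >= 0.9 * n_rep
sol, *_ = np.linalg.lstsq(Psi[:, robust], y, rcond=None)
c_final = np.zeros(degree + 1)
c_final[robust] = sol
print(np.flatnonzero(robust))                   # degrees 0, 1, and 2 survive
```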
Switching gears to experimental designs: in his semi-plenary talk, Michael Shields presented his group’s recent research on sampling for the purpose of surrogate modeling. He gave some fresh insights into Latin hypercube sampling (LHS) (@nluethen was pleasantly surprised by the theoretical basis for LHS). He then introduced a novel sampling method, a generalization of LHS and stratified sampling, that provides superior variance reduction. As an illustration, he applied this sampling method to the uncertainty analysis of an expensive ship hull strength simulation with 41 parameters.
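For reference, basic LHS itself takes only a few lines; the generalized stratified scheme presented in the talk is considerably more involved:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Basic LHS on [0, 1]^d: each of the n equal-probability strata
    of every dimension contains exactly one sample."""
    samples = np.empty((n, d))
    for j in range(d):
        strata = rng.permutation(n)                 # random stratum order per dimension
        samples[:, j] = (strata + rng.uniform(size=n)) / n
    return samples

X = latin_hypercube(10, 3, np.random.default_rng(0))
# In every column, each interval [i/10, (i+1)/10) holds exactly one point
```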
On Bayesian model calibration
Another one of our Ph.D. students, @paulremo, attended talks related to his main research topic: Bayesian analysis for model calibration. UNCECOMP 2019 had a mini-symposium on this topic organized by Iason Papaioannou, Daniel Straub, and Costas Papadimitriou, which presented a wide range of interesting topics in Bayesian computation, from practical applications to new methodologies. In particular:
- Geert Lombaert gave a keynote presentation on the problem of optimal sensor placement for making predictions rather than parameter inference.
- Felipe Uribe talked about an interesting approach to use the subset simulation method together with trans-dimensional MCMC samplers to tackle inference problems of varying dimensionalities.
On machine learning
The last (but certainly not least) of our Ph.D. students at UNCECOMP, @xujia, has been working on statistical approaches to the surrogate modeling of stochastic simulators. He attended several interesting talks on machine learning and statistics. Here are his notes on some of the notable ones:
- Christian Soize gave a plenary lecture on applying manifold learning to stochastic optimization problems. This approach involves several advanced techniques to learn the data structure of the input variables and the output quantity of interest, namely: principal component analysis (PCA), diffusion maps, and MCMC. Of the three, diffusion maps are relatively new in the UQ communities. Based on some analytical examples and case studies, the proposed method demonstrates high accuracy even for high dimensional problems.
- José Nathan Kutz presented his and his colleagues’ work on data-driven model discovery using deep neural networks, in particular, finding an appropriate coordinate system in which to model the evolution of dynamical systems. The architecture is similar to an autoencoder, but instead of pure data compression, the latent variables of the autoencoder are taken as the new coordinate system. Furthermore, an extra step is added between the latent-variable layer and the decoder. This extra step uses a governing equation (typically a partial differential equation, PDE) that describes the evolution of the latent variables. In the spirit of machine learning, the PDE itself is also learned from the data. This approach is remarkably close to the recent work of @c.lataniotis and @ste on combining machine-learning compression techniques with surrogate models to tackle high dimensionality.
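As a rough analogy (not Kutz's actual architecture): replace the deep autoencoder with a linear one (PCA) and the learned latent-space model with a polynomial regression, and the "compress, then model in the latent coordinates" idea looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100-dimensional data that actually lives on a 2D latent subspace
d, n = 100, 200
z_true = rng.normal(size=(n, 2))                   # hidden coordinates
A = rng.normal(size=(2, d))
X = z_true @ A                                     # observed high-dimensional data
y = z_true[:, 0] ** 2 + z_true[:, 1]               # quantity of interest

# "Encoder": a linear autoencoder, i.e. projection onto the top-2 PCA directions
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[:2].T                                  # recovered latent coordinates

# Model in the latent space: full quadratic regression in the 2 coordinates
Phi = np.stack([np.ones(n), z[:, 0], z[:, 1],
                z[:, 0] ** 2, z[:, 0] * z[:, 1], z[:, 1] ** 2], axis=1)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.max(np.abs(Phi @ coef - y)))              # essentially zero: exact fit
```

Because the latent coordinates recovered by PCA are an invertible affine transformation of the true hidden coordinates, a quadratic model in the 2 latent variables reproduces the 100-dimensional input-output map exactly.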
- George Karniadakis talked about physics-informed neural networks (PINN), which apply neural networks to the solution of physical equations. The main idea of this method is to combine the data-driven fit of deep neural networks with physics-based equations (say, a set of PDEs). More precisely, the objective function (say, a mean squared error) to be minimized consists of two parts: first, the difference between the solution (either numerical or analytical) and its neural network approximation at the initial and boundary conditions; and second, the residual obtained by injecting the neural network approximation into the governing equation. This composite objective can then be minimized within the standard neural network training procedure (using automatic differentiation). The method has also been extended to solving differential equations with random coefficients.
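The structure of the two-part objective can be illustrated on a toy ODE, u'(t) + u(t) = 0 with u(0) = 1, using a one-parameter trial function u(t; a) = exp(a t) in place of a neural network (a simplification to keep the sketch dependency-free; note that this trial family satisfies the initial condition by construction, so the boundary term is zero here and is kept only to show the structure of the loss):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)                      # collocation points

def loss(a):
    u = np.exp(a * t)                              # trial "network" u(t; a)
    du = a * np.exp(a * t)                         # its exact derivative
    residual = du + u                              # equation residual: u' + u
    bc = (np.exp(a * 0.0) - 1.0) ** 2              # initial-condition misfit
    return bc + np.mean(residual ** 2)             # the two-part PINN objective

# Minimize by plain gradient descent with a finite-difference gradient
# (a real PINN would use automatic differentiation for both u' and the loss)
a = 0.5
for _ in range(2000):
    grad = (loss(a + 1e-6) - loss(a - 1e-6)) / 2e-6
    a -= 0.05 * grad
# a converges to -1: the trial function recovers the true solution exp(-t)
```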
- Martin Eigel gave a keynote lecture on combining statistical learning theory with numerical analysis. This work aims at laying a theoretical foundation for solving PDEs with random parameters.
So we leave you with this enjoyable “further reading” list. If you have any questions or thoughts about any of these topics, feel free to open a new topic in UQWorld and let’s discuss!
P. Tsilifis and R. Ghanem, “Bayesian adaptation of chaos representations using variational inference and sampling on geodesics,” Proceedings of the Royal Society A, vol. 474, no. 2217, 2018. DOI:10.1098/rspa.2018.0285 ↩︎
I. Papaioannou, M. Ehre, and D. Straub, “PLS-based adaptation for efficient PCE representation in high-dimensions,” Journal of Computational Physics, vol. 387, pp. 186–204, 2019. DOI:10.1016/j.jcp.2019.02.046 ↩︎
C. Argyris, C. Papadimitriou, and G. Lombaert, “Optimal Sensor Placement for Response Predictions Using Local and Global Methods,” in Model Validation and Uncertainty Quantification, vol. 3, R. Barthorpe, Ed., Springer, 2020. DOI:10.1007/978-3-030-12075-7_26 ↩︎
R. Ghanem and C. Soize, “Probabilistic nonconvex constrained optimization with fixed number of function evaluations,” International Journal for Numerical Methods in Engineering, vol. 113, no. 4, pp. 719–741, 2018. DOI:10.1002/nme.5632 ↩︎
M. Raissi, P. Perdikaris, and G.E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019. DOI:10.1016/j.jcp.2018.10.045 ↩︎