Notes from UNCECOMP 2019


A few weeks ago (24–26 June 2019), UNCECOMP 2019 took place on the island of Crete, Greece. With over 300 talks, it might well be one of the largest conferences on uncertainty quantification (UQ). Seven members of our Chair (that’s almost everybody!) attended the conference and presented their work.

Here are some highlights and noteworthy talks from the conference, as reported by three of our Ph.D. students! They apologize in advance if they have misconstrued any of these talks :slight_smile: .

On PCE and experimental designs

Our Ph.D. student @nluethen has been working on polynomial chaos expansions (PCE) and all the ingredients needed to construct a sparse PCE metamodel. She attended several interesting talks on the subject, particularly on sparsity and experimental designs. Below are her summaries of these talks.

Roger Ghanem and Iason Papaioannou talked about two different methodologies with a similar underlying idea: a new representation of the input random vector such that the resulting surrogate model depends on only a few of the new coordinates.

  • Ghanem’s approach is to identify a rotation of the input random vector that yields new coordinates in which the non-zero PCE coefficients are aligned with a small number of coordinates (Ghanem emphasized, however, that this is not the same as sparsity). He referred to this rotation as a “telescope into the data” and presented three different methods to compute it. Some of the relevant papers related to this plenary talk can be found here[1],[2],[3].
  • Papaioannou’s approach[4], on the other hand, uses partial least squares (PLS) to identify latent structures that have high covariance with the model response. He presented a nonlinear extension of PLS for PCE and applied the method successfully to a 300-dimensional problem (a small illustration of the basic linear PLS idea is sketched right after this list).
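For readers less familiar with PLS, here is a minimal sketch of the plain (linear) PLS idea using scikit-learn’s `PLSRegression`. It only illustrates how PLS finds a few latent directions with high covariance with the response; it is not the nonlinear PLS-PCE method of [4], and the toy model below is our own:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy model: 20 inputs, but the response depends on only two hidden
# linear combinations of them (a latent structure that PLS can recover).
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
y = X @ w1 + 0.2 * (X @ w2) ** 3 + 0.01 * rng.standard_normal(n)

# Fit PLS with a small number of latent components.
pls = PLSRegression(n_components=2).fit(X, y)

# The latent scores are the new, low-dimensional coordinates; a surrogate
# (e.g., a PCE) can then be built in these 2 coordinates instead of all 20.
Z = pls.transform(X)
print(Z.shape)  # (200, 2)
```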

Zicheng Liu’s work on resampled PCE[5] (presented by his colleague Soumaya Azzi) was also interesting. In this approach, the performance of a sparse PCE built from a rather small experimental design is improved by recomputing the PCE many times on different subsets of the data. In the end, the terms that were selected most often (“selected” here means that the term received a nonzero coefficient) are considered the most robust and are used in the final PCE computation.
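To make the resampling idea concrete, here is a minimal sketch in Python. We use a one-dimensional Legendre basis and scikit-learn’s Lasso as a stand-in sparse solver; the subset fraction, repetition count, and selection threshold are illustrative choices of ours, not those of [5]:

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Small experimental design on [-1, 1] and a toy model response:
# y = 1 + 0.5 * P2(x) + noise, with P2 the degree-2 Legendre polynomial.
n, degree = 30, 10
x = rng.uniform(-1, 1, n)
y = 1.0 + 0.5 * legendre.legval(x, [0, 0, 1]) + 0.01 * rng.standard_normal(n)

# Regression matrix of Legendre polynomials up to `degree`.
Psi = legendre.legvander(x, degree)

# Refit a sparse solution on many random subsets of the design and count
# how often each basis term receives a nonzero coefficient.
counts = np.zeros(degree + 1)
for _ in range(200):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)
    lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
    counts += np.abs(lasso.fit(Psi[idx], y[idx]).coef_) > 1e-8

# Keep the most frequently selected (i.e., most robust) terms and refit
# them on the full design by ordinary least squares.
robust = counts >= 0.9 * counts.max()
c_final, *_ = np.linalg.lstsq(Psi[:, robust], y, rcond=None)
print("retained terms:", np.flatnonzero(robust))  # expected: [0 2]
```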

Switching gears to experimental designs: in his semi-plenary talk, Michael Shields presented his group’s recent research on sampling for the purpose of surrogate modeling. He gave some insights about Latin hypercube sampling (LHS) that were new to us (@nluethen was pleasantly surprised by the theoretical basis for LHS). He then introduced a novel sampling method that generalizes LHS and stratified sampling, providing superior variance reduction. As an illustration, he presented the application of this sampling method to the uncertainty analysis of an expensive ship hull strength simulation with 41 parameters[6].
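As a refresher, plain LHS stratifies each input dimension into n equally probable bins and places exactly one point per bin in each dimension. Here is a minimal numpy sketch on the unit hypercube (our own illustration of standard LHS, not the generalized method of [6]):

```python
import numpy as np

def latin_hypercube(n, d, rng=np.random.default_rng()):
    """n samples in [0, 1]^d with one point per stratum in each dimension."""
    # Each dimension is split into n equal strata; draw one point per
    # stratum, then permute the strata independently per dimension so
    # the pairing across dimensions is random.
    u = rng.uniform(size=(n, d))
    strata = np.arange(n)
    samples = np.empty((n, d))
    for j in range(d):
        samples[:, j] = (rng.permutation(strata) + u[:, j]) / n
    return samples

X = latin_hypercube(100, 3)
# Check the LHS property: each of the 100 strata per dimension holds one point.
assert all(len(np.unique((X[:, j] * 100).astype(int))) == 100 for j in range(3))
```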

On Bayesian model calibration

Another one of our Ph.D. students, @paulremo, attended talks related to his main research topic: Bayesian analysis for model calibration. UNCECOMP 2019 had a mini-symposium on this topic, organized by Iason Papaioannou, Daniel Straub, and Costas Papadimitriou, which presented a wide range of interesting topics in Bayesian computation, from practical applications to new methodologies (a minimal sketch of the generic calibration setting follows the list below). In particular:

  • Geert Lombaert gave a keynote presentation on the problem of optimal sensor placement for making predictions rather than parameter inference[7].
  • Felipe Uribe talked about an interesting approach that combines subset simulation with trans-dimensional MCMC samplers[8] to tackle inference problems of varying dimensionality.
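For readers new to the topic, the generic Bayesian calibration setting can be sketched in a few lines: given noisy observations of a forward model, the posterior of the model parameters combines a prior with a misfit-based likelihood and can be sampled, e.g., by random-walk Metropolis. The toy model, prior, and tuning constants below are our own illustrative choices; the talks above use far more advanced samplers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward model (here a toy linear model) and noisy observations of it.
def model(theta, x):
    return theta[0] + theta[1] * x

x_obs = np.linspace(0, 1, 10)
y_obs = model([1.0, 2.0], x_obs) + 0.1 * rng.standard_normal(10)

def log_posterior(theta, sigma=0.1):
    # Gaussian likelihood around the model prediction + standard normal prior.
    log_lik = -0.5 * np.sum((y_obs - model(theta, x_obs)) ** 2) / sigma**2
    log_prior = -0.5 * np.sum(np.asarray(theta) ** 2)
    return log_lik + log_prior

# Random-walk Metropolis: perturb the parameters, accept with
# probability min(1, posterior ratio).
theta = np.zeros(2)
chain = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    chain.append(theta)

print("posterior mean:", np.mean(chain[1000:], axis=0))  # close to [1, 2]
```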

On machine learning

The last (but certainly not least) of our Ph.D. students at UNCECOMP, @Xujia, has been working on statistical approaches to the surrogate modeling of stochastic simulators. He attended several interesting talks on machine learning and statistics. Here are his notes on some of the notable talks:

  • Christian Soize gave a plenary lecture on applying manifold learning to stochastic optimization problems[9]. This approach involves several advanced techniques to learn the data structure of the input variables and the output quantity of interest, namely principal component analysis (PCA), diffusion maps, and MCMC. Of the three, diffusion maps[10] are relatively new to the UQ community. On a set of analytical examples and case studies, the proposed method demonstrated high accuracy even for high-dimensional problems.
  • José Nathan Kutz presented his and his colleagues’ work on data-driven model discovery using deep neural networks, in particular, finding an appropriate coordinate system in which to model the evolution of dynamical systems [11]. The architecture resembles an autoencoder but, instead of performing pure data compression, the latent variables of the autoencoder are taken as the new coordinate system. Furthermore, an extra step is added between the latent-variable layer and the decoder: a governing equation (typically a partial differential equation, or PDE) that describes the evolution of the latent variables. In the spirit of machine learning, this PDE is itself learned from the data. This approach is remarkably close to the recent work of @c.lataniotis and @ste on combining machine learning compression techniques with surrogate models to tackle high dimensionality [12].
  • George Karniadakis talked about physics-informed neural networks (PINNs), which apply neural networks to the solution of physical equations [13]. The main idea of this method is to combine the data-driven fit of deep neural networks with physics-based equations (say, a set of PDEs). More precisely, the objective function to be minimized (say, a mean squared error) consists of two parts: first, the misfit between the solution (either numerical or analytical) and its neural network approximation at the initial and boundary conditions; and second, the residual obtained by injecting the neural network approximation into the governing equation. Both terms can be evaluated and minimized within the standard neural network training procedure thanks to automatic differentiation (see the toy sketch after this list). This method has also been extended to solving differential equations with random coefficients [14].
  • Martin Eigel gave a keynote lecture on combining statistical learning theory with numerical analysis, a work that aims at laying a theoretical foundation for solving PDEs with random parameters [15].
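To make the two-part PINN loss concrete, here is a toy PyTorch sketch for the ODE u′(x) = −u(x) with u(0) = 1 (exact solution exp(−x)). The network architecture and training settings are our own illustrative choices; the framework of [13] handles general nonlinear PDEs:

```python
import torch

torch.manual_seed(0)

# Small network approximating u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the residual u'(x) + u(x) = 0 is enforced,
# plus the initial condition u(0) = 1.
x = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)
x0 = torch.zeros(1, 1)

for step in range(3000):
    u = net(x)
    # du/dx via automatic differentiation.
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    loss_pde = ((du + u) ** 2).mean()        # residual of u' = -u
    loss_ic = ((net(x0) - 1.0) ** 2).mean()  # initial condition u(0) = 1
    loss = loss_pde + loss_ic                # the two-part PINN objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare against the exact solution u(x) = exp(-x).
x_test = torch.linspace(0, 2, 5).reshape(-1, 1)
print(net(x_test).detach().squeeze())
print(torch.exp(-x_test).squeeze())
```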

So we leave you with this enjoyable “further reading” list. If you have any questions or thoughts about any of these topics, feel free to open a new topic in UQWorld and let’s discuss!

[Photo]
Good times.




  1. R. Tipireddy and R. Ghanem, “Basis adaptation in homogeneous chaos spaces,” Journal of Computational Physics, vol. 259, pp. 304–317, 2014. DOI:10.1016/j.jcp.2013.12.009 ↩︎

  2. P. Tsilifis et al., “Compressive sensing adaptation for polynomial chaos expansions,” Journal of Computational Physics, vol. 380, pp. 29–47, 2019. DOI:10.1016/j.jcp.2018.12.010 ↩︎

  3. P. Tsilifis and R. Ghanem, “Bayesian adaptation of chaos representations using variational inference and sampling on geodesics,” Proceedings of the Royal Society A, vol. 474, no. 2217, 2018. DOI:10.1098/rspa.2018.0285 ↩︎

  4. I. Papaioannou, M. Ehre, and D. Straub, “PLS-based adaptation for efficient PCE representation in high dimensions,” Journal of Computational Physics, vol. 387, pp. 186–204, 2019. DOI:10.1016/j.jcp.2019.02.046 ↩︎

  5. Z. Liu, D. Lesselier, B. Sudret, and J. Wiart, “Surrogate modeling based on resampled polynomial chaos expansions,” arXiv. ↩︎

  6. M. Shields and J. Zhang, “The generalization of Latin hypercube sampling,” Reliability Engineering & System Safety, vol. 148, pp. 96–108, 2016. DOI:10.1016/j.ress.2015.12.002 ↩︎

  7. C. Argyris, C. Papadimitriou, and G. Lombaert, “Optimal Sensor Placement for Response Predictions Using Local and Global Methods,” in Model Validation and Uncertainty Quantification, vol. 3, R. Barthorpe, Ed., Springer, 2020. DOI:10.1007/978-3-030-12075-7_26 ↩︎

  8. F. Uribe et al., “Bayesian inference with subset simulation in spaces of varying dimension,” URL ↩︎

  9. R. Ghanem and C. Soize, “Probabilistic nonconvex constrained optimization with fixed number of function evaluations,” International Journal for Numerical Methods in Engineering, vol. 113, no. 4, pp. 719–741, 2018. DOI:10.1002/nme.5632 ↩︎

  10. R. Coifman and S. Lafon, “Diffusion maps,” Applied and Computational Harmonic Analysis, vol. 21, no. 1, pp. 5–30, 2006. DOI:10.1016/j.acha.2006.04.006 ↩︎

  11. K. Champion, B. Lusch, J.N. Kutz, and S.L. Brunton, “Data-driven discovery of coordinates and governing equations,” arXiv. ↩︎

  12. C. Lataniotis, S. Marelli, and B. Sudret, “Extending classical surrogate modelling to ultrahigh dimensional problems through supervised dimensionality reduction: a data-driven approach,” arXiv. ↩︎

  13. M. Raissi, P. Perdikaris, and G.E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019. DOI:10.1016/j.jcp.2018.10.045 ↩︎

  14. D. Zhang, L. Guo, and G.E. Karniadakis, “Learning in Modal Space: Solving Time-Dependent Stochastic PDEs Using Physics-Informed Neural Networks,” arXiv. ↩︎

  15. M. Eigel, R. Schneider, P. Trunschke, and S. Wolf, “Variational Monte Carlo – Bridging Concepts of Machine Learning and High Dimensional Partial Differential Equations,” arXiv. ↩︎
