Upcoming Major Release 1.4.0, and some update issues

Hi all,
it’s been a while, with all the “crazy” brought on by 2020, but here we are, ready to release a new major version of UQLab: 1.4.0. The official release date will be next Monday, February 1st, 2021.

Due to circumstances outside of our control, however, there may be some complications in the update process for some of our users who rely on Windows 10.
In the following, I’d like to provide you with a short recap in the form of “the good, the bad and the ugly”, so that you know what to expect from next week’s update.

The good

This release comes with a lot of new features, improvements, bugfixes, and other goodies. You can find the details directly on the UQLab website (www.uqlab.com/release-notes), as well as in the dedicated post here on UQWorld.
These include (but are not limited to):

  • A new high-performance-computing dispatcher module that allows one to submit, monitor, retrieve and manipulate distributed computing jobs from the comfort of the Matlab command line. This includes UQLab and Matlab functions, but also other types of jobs that don’t require Matlab on the remote computing nodes. Thanks to @damarginal for the crazy powerful and flexible tool!
  • A brand new active-learning-reliability (ALR) tool that allows you to create custom active-learning reliability analysis methods by mixing surrogate models, reliability methods, and learning functions. It even comes with its own user manual :wink: . Brought to you by @moustapha, this was the tool used in the TNO Benchmark (link to the post here).
  • Several improvements in the PCE module, including two new powerful solvers: subspace pursuit (SP) and Bayesian compressive sensing (BCS). The two solvers were developed by @nluethen, following her recent extensive benchmark studies here and here.

The bad
Due to a strange bug in the Matlab-Windows interaction that popped up following a Windows 10 update in late 2019 (the earliest mention I could find is this), some Matlab installations (prior to R2020a) may run into strange errors while copying files across folders on up-to-date Windows 10 computers.
As you can imagine, copying files across folders is an integral part of the UQLab auto-updater tool, which automatically checks for the latest release and prompts the user to update UQLab.

The ugly
Unfortunately, because this issue was not known at the time of the previous release, the current version of the updater suffers from this bug, which makes auto-update impossible on systems with the following setup:

  • Matlab R2019b or older
  • A reasonably up-to-date Windows 10 installation (it is impossible for us to pinpoint the exact update that caused this problem)

By far the best solution for those of you who have this configuration is to simply skip the auto-update process and reinstall UQLab from scratch by following the installation instructions here.
All users with an active UQLab license will also receive an email linking to this post, together with a reminder of the installation process.
Of course, everyone who has a different configuration should not be affected by this bug.

I apologize for the inconvenience, but I still hope you’ll have as much fun with Release 1.4.0 as we had developing it!

'til the next time,
Stefano


It’s exciting to hear the news that UQLab will introduce an “active learning” tool :grinning:

Dear Stefano,

My primary interest in using UQLab is active learning of complicated functions, such as using a surrogate model to approximate expensive FEM simulations.

So, I wonder if the newly released ALR tool can be used for adaptively updating the surrogate model I am interested in. When I heard that version 1.4.0 would include an active learning tool (ALR), I was very excited. However, after looking at the newest manual, it seems that ALR still only supports reliability analysis, not general active learning of complicated functions. Moreover, ALR can only adaptively update the surrogate of the limit-state function, by enriching the DOE based on only four specific learning functions.

Is it possible for me to modify the code so as to enable a general active learning tool for the surrogate modeling process? If I attempt to modify the code, could you tell me which parts of the code are relevant? And do you have any advice for me?

Looking forward to hearing your response.


Dear @GPLai,
thanks for your feedback.

Indeed the ALR module is part of the reliability tool (hence the “R” in its name), because in that field the final goal (accuracy measure) is very clear. We therefore feel comfortable distributing this kind of “scientifically mature” code.

As far as general function approximation is concerned, however, the topic is extremely broad, because different approximation goals can have very different optimal learning strategies (e.g. convergence in moments is very different from RMS convergence). Therefore, we don’t provide built-in algorithms (yet).

To construct your active learning algorithm, you can use UQLab directly, without any need to modify the code, following the common outer-loop approach (see the sketch after the list):

  1. Generate an initial experimental design.
  2. Train the surrogate.
  3. Maximize some learning function with the current surrogate in the input space to identify new ED points.
  4. Check convergence. If converged, exit; otherwise continue.
  5. Evaluate the expensive model at the maxima of the learning function identified in step 3, and add them to the experimental design.
  6. Restart from step 2.

All these steps can be done separately, without any need to modify UQLab, probably in less than 50 lines of code if a good learning function is available.
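To make this concrete, here is a minimal sketch of such an outer loop with a Kriging surrogate and a simple variance-based learning function. The object names (myInput, myFullModel) are placeholders, and the candidate-pool maximization in step 3 is just one possible choice:

```matlab
% Minimal outer-loop sketch. Placeholder names: myInput and the expensive
% myFullModel are assumed to exist already (UQLab initialized, objects created
% with uq_createInput / uq_createModel).
X = uq_getSample(myInput, 10, 'LHS');        % 1. initial experimental design
Y = uq_evalModel(myFullModel, X);

for iter = 1:50
    % 2. (re)train the Kriging surrogate on the current experimental design
    MetaOpts = struct('Type', 'Metamodel', 'MetaType', 'Kriging');
    MetaOpts.ExpDesign.X = X;
    MetaOpts.ExpDesign.Y = Y;
    myKriging = uq_createModel(MetaOpts, '-private');

    % 3. learning function on a candidate pool: here simply the Kriging
    %    prediction variance (swap in a criterion suited to your goal)
    Xcand = uq_getSample(myInput, 1e4, 'LHS');
    [~, Yvar] = uq_evalModel(myKriging, Xcand);
    [maxVar, idx] = max(Yvar);

    % 4. convergence check: stop when the largest predicted variance is small
    if maxVar < 1e-6
        break
    end

    % 5. run the expensive model at the selected point and enrich the design
    X = [X; Xcand(idx, :)];
    Y = [Y; uq_evalModel(myFullModel, Xcand(idx, :))];
end                                          % 6. the loop restarts from step 2
```

The pool size and the stopping tolerance are of course problem dependent; the only UQLab-specific parts here are the calls to uq_getSample, uq_createModel and uq_evalModel.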

Now, the issue is that the learning function in step 3 really depends on the goal of the surrogate, and that is where things get too long for a forum post.

Maybe @bsudret has some good suggestions on recent literature for general purpose adaptive surrogate modeling, but in the general sense optimality is problem dependent. There is a lot of literature on different adaptive surrogate models for sensitivity, Bayesian inversion, reliability analysis, etc… We ourselves worked on active learning for general purpose PCE here: On optimal experimental designs for sparse polynomial chaos expansions (full paper available here: https://epubs.siam.org/doi/abs/10.1137/16M1103488).

Hope this makes sense,
Stefano

Thanks @ste. Indeed, my purpose is to use active learning for global fitting, and I want to characterize the behavior of a complex system in a high-dimensional space. As the system is highly nonlinear and high-dimensional, I think active learning may help to solve this problem.

Before asking this question, I had already defined several learning functions for active learning with a Kriging model in UQLab, such as EIGF, which mainly uses the prediction variance of the Kriging metamodel to guide the selection of the next sampling point. For comparison, I have also used some space-filling sequential sampling strategies.

When used on a noise-free model, EIGF seems to work well, and the new sampling points tend to fall in regions with sparse sampling density, similar to a space-filling sampling method. However, when used on a noisy model, the new sampling points tend to fall in regions that already have a high sampling density (a clustering issue). This may be because the Kriging model suggests that these regions have high uncertainty (variance) and should therefore receive more samples. However, this leads to low efficiency.
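For reference, the EIGF-type criterion I mean is roughly the following (a sketch with placeholder names, following the usual definition of squared deviation from the nearest observed response plus the Kriging variance; knnsearch requires the Statistics and Machine Learning Toolbox):

```matlab
% Sketch of an EIGF-type criterion on a candidate pool (placeholder names:
% myKriging is the current Kriging model, X/Y the current experimental design).
Xcand = uq_getSample(myInput, 1e4, 'LHS');
[Ymu, Yvar] = uq_evalModel(myKriging, Xcand);   % Kriging mean and variance

idxNN = knnsearch(X, Xcand);                    % nearest existing design point
EIGF  = (Ymu - Y(idxNN)).^2 + Yvar;             % "improvement" term + variance term

[~, iBest] = max(EIGF);
xNew = Xcand(iBest, :);                         % candidate for the next model run
```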

So, I would like to know how to solve this problem. Could you give me some advice?