Sampling for PCE Coefficient Calculation

Hello!

I’ve been working with PCE and created my own simple example in Python to compare it with UQ[py]Lab. However, I’ve noticed that my Python implementation requires significantly more samples to achieve accurate coefficients than UQLab does.

Here’s a brief overview of my issue:

Sample size discrepancy: UQLab achieves accurate PCE coefficients with fewer samples, whereas my Python code needs a much larger sample size (e.g., N = 1000) for comparable accuracy.
Coefficient calculation: the coefficients a0, a1, a2, a3 are computed as sample averages of the model output multiplied by the Hermite polynomial evaluations.

Why does UQLab achieve accurate coefficients with fewer samples? Are there specific sampling or integration techniques in UQLab that might explain this?

How can I improve my Python code to reduce the sample size required for accurate coefficients?

Below is a reproducible example of my Python code:

import numpy as np
from scipy.stats import norm, qmc
import matplotlib.pyplot as plt


# Model to be expanded
def Y(x):
    return 5*x**3 - 3*x**2 + 2*x + 4


# Orthonormal (probabilists') Hermite polynomials
def H0(x):
    return 1

def H1(x):
    return x

def H2(x):
    return (x**2 - 1) / np.sqrt(2)

def H3(x):
    return (x**3 - 3*x) / np.sqrt(6)


# Latin hypercube sample, mapped to the standard normal distribution
N = 1000  # Number of samples
sampler = qmc.LatinHypercube(d=1)
sample = sampler.random(n=N)
x_samples_normal = norm.ppf(sample)  # Transform to standard normal


# Evaluate the model and the basis at the sample points
Y_samples_normal = Y(x_samples_normal)
H0_samples = H0(x_samples_normal)
H1_samples = H1(x_samples_normal)
H2_samples = H2(x_samples_normal)
H3_samples = H3(x_samples_normal)


# PCE coefficients as Monte Carlo estimates of E[Y(X) * H_k(X)]
a0 = np.mean(Y_samples_normal * H0_samples)
a1 = np.mean(Y_samples_normal * H1_samples)
a2 = np.mean(Y_samples_normal * H2_samples)
a3 = np.mean(Y_samples_normal * H3_samples)

print("a0 = {:.2f}, a1 = {:.2f}, a2 = {:.2f}, a3 = {:.2f}".format(a0, a1, a2, a3))


# PCE surrogate
def Y_PCE_normal(x):
    return a0*H0(x) + a1*H1(x) + a2*H2(x) + a3*H3(x)


# Compare the original model with the PCE approximation
x_values = np.linspace(-3, 3, 400)
y_values = Y(x_values)
y_pce_values_normal = Y_PCE_normal(x_values)

plt.figure(figsize=(10, 6))
plt.plot(x_values, y_values, label='Original Function')
plt.plot(x_values, y_pce_values_normal, label='PCE Approximation (Normal Samples)', linestyle='dashed')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Comparison of Original Function and PCE Approximation (Normal Samples)')
plt.legend()
plt.show()

Hi @Eilidh_Radcliff,

You are currently calculating the PCE coefficients from the expectation y_\alpha=\mathbb{E}[\Psi_\alpha(\boldsymbol{X})\mathcal{M}(\boldsymbol{X})] (refer to Eq. (1.25) in the UQLab user manual), which you evaluate via Monte Carlo integration. While this method is valid, it is not the most efficient approach. UQLab, for example, approximates this expectation using quadrature, which significantly reduces the number of samples needed. You can see the impact this has in the example below.

However, it is unclear from your post what exactly you are doing with UQLab. By default, UQLab calculates the coefficients using regression, or LARS to be more specific. To make a meaningful comparison between your implementation and UQLab’s, it is also essential to ensure that both use the same data points.

Unless it is for educational purposes, I would generally not recommend implementing PCE from scratch. It is usually better to rely on existing implementations that incorporate state-of-the-art techniques; these are often challenging to develop and optimize, but in many cases they deliver significantly better results.

I hope this brings some clarity!

Best regards,
Styfen

Example comparing Gauss–Hermite quadrature with Monte Carlo integration:
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm, qmc

def Y(x):
    return 5*x**3 - 3*x**2 + 2*x + 4

def H0(x):
    return 1

def H1(x):
    return x

def H2(x):
    return (x**2 - 1) / np.sqrt(2)

def H3(x):
    return (x**3 - 3*x) / np.sqrt(6)

# -------------------------------------------------------------------
# Use Gaussian quadrature 
# -------------------------------------------------------------------

N = 4  # Number of quadrature points
nodes, weights = hermgauss(N)

# Transform to standard normal
x_samples_normal = np.sqrt(2) * nodes
Y_samples_normal = Y(x_samples_normal)

H0_samples = H0(x_samples_normal)
H1_samples = H1(x_samples_normal)
H2_samples = H2(x_samples_normal)
H3_samples = H3(x_samples_normal)

# Compute coefficients using quadrature (weights are divided by sqrt(pi)
# so that they sum to one under the standard normal measure)
a0 = np.sum(weights * Y_samples_normal * H0_samples) / np.sqrt(np.pi)
a1 = np.sum(weights * Y_samples_normal * H1_samples) / np.sqrt(np.pi)
a2 = np.sum(weights * Y_samples_normal * H2_samples) / np.sqrt(np.pi)
a3 = np.sum(weights * Y_samples_normal * H3_samples) / np.sqrt(np.pi)

print("a0 = {:.2f}, a1 = {:.2f}, a2 = {:.2f}, a3 = {:.2f}".format(a0, a1, a2, a3))

# -------------------------------------------------------------------
# Use MC integration
# -------------------------------------------------------------------

N = 1_000_000  # Number of samples
sampler = qmc.LatinHypercube(d=1)
sample = sampler.random(n=N)

x_samples_normal = norm.ppf(sample)  # Transform to standard normal
Y_samples_normal = Y(x_samples_normal)

H0_samples = H0(x_samples_normal)
H1_samples = H1(x_samples_normal)
H2_samples = H2(x_samples_normal)
H3_samples = H3(x_samples_normal)

a0 = np.mean(Y_samples_normal * H0_samples)
a1 = np.mean(Y_samples_normal * H1_samples)
a2 = np.mean(Y_samples_normal * H2_samples)
a3 = np.mean(Y_samples_normal * H3_samples)

print("a0 = {:.2f}, a1 = {:.2f}, a2 = {:.2f}, a3 = {:.2f}".format(a0, a1, a2, a3))

Thank you for your response!

I’d like to clarify my understanding of the projection and regression methods for calculating PCE coefficients.

In my code, I calculate the PCE coefficients using Monte Carlo integration, i.e. by averaging the product of the model output and the basis functions over many random samples; I understand this is a projection method. Quadrature methods such as Gaussian quadrature also fit into the projection approach, using specific sample points and weights to evaluate the same integral more efficiently.

From what I gather, UQLab uses Least Angle Regression (LARS) by default. LARS is a sparse regression technique that keeps only the most significant regressors, i.e. it identifies which coefficients are non-zero and sets the rest to zero.

Am I correct that both Monte Carlo integration and quadrature methods are examples of the projection method? When might one method (projection or regression) be preferred?

Yes, any method that evaluates the expectation y_\alpha=\mathbb{E}[\Psi_\alpha(\boldsymbol{X})\mathcal{M}(\boldsymbol{X})] can be considered a projection method.
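
For illustration (using the notation from the scripts above), both of your estimators target the same integral

a_k \;=\; \mathbb{E}[Y(X)\,H_k(X)] \;=\; \int_{-\infty}^{\infty} Y(x)\,H_k(x)\,\varphi(x)\,\mathrm{d}x,

where \varphi is the standard normal density. Monte Carlo approximates it by \frac{1}{N}\sum_{i=1}^{N} Y(x_i)\,H_k(x_i) with x_i \sim \mathcal{N}(0,1), whereas Gauss–Hermite quadrature (after the change of variables x=\sqrt{2}\,t) uses \sum_{j=1}^{n} \frac{w_j}{\sqrt{\pi}}\,Y(\sqrt{2}\,t_j)\,H_k(\sqrt{2}\,t_j) with nodes t_j and weights w_j. The only difference is how the integration points and weights are chosen.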

I think your understanding of sparse regression is correct. However, I would like to emphasize that there are essentially two tasks that can sometimes be separated (e.g. in LARS). These are:

  • Identifying the main regressors, i.e. the indices of the non-zero coefficients.
  • Calculating the corresponding coefficient values.

For example, UQLab uses a hybrid LARS approach: it uses standard LARS to identify the regressors (i.e. which coefficients should be non-zero), but then computes the least-squares solution for the coefficients that were found to be important. The reason is that LARS is excellent at identifying the non-zero coefficients, but the empirical error can always be reduced further (i.e. the empirical R^2 improved) by using the OLS solution for those selected coefficients.
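
To make the two steps concrete, below is a minimal sketch of the selection-then-refit idea for the example above, using scikit-learn's Lars for the selection step. This is only an illustration under my own assumptions (a random experimental design and a fixed candidate basis up to degree 5); UQLab's hybrid LARS implementation differs in its details.

import numpy as np
from math import factorial
from sklearn.linear_model import Lars

def Y(x):
    return 5*x**3 - 3*x**2 + 2*x + 4

def psi(x, k):
    # Orthonormal probabilists' Hermite polynomials up to degree 5
    He = [np.ones_like(x), x, x**2 - 1, x**3 - 3*x,
          x**4 - 6*x**2 + 3, x**5 - 10*x**3 + 15*x]
    return He[k] / np.sqrt(factorial(k))

rng = np.random.default_rng(0)
x = rng.standard_normal(200)  # existing experimental design (random sample)
y = Y(x)

# Full regression matrix, including the constant term psi_0
Psi = np.column_stack([psi(x, k) for k in range(6)])

# Step 1: LARS identifies the most important regressors. The constant term
# is handled by the intercept, so only degrees 1..5 enter the LARS path.
lars = Lars(n_nonzero_coefs=3, fit_intercept=True).fit(Psi[:, 1:], y)
active = np.flatnonzero(lars.coef_) + 1  # shift indices: column 0 is psi_0

# Step 2: ordinary least-squares refit on psi_0 plus the selected regressors
sel = np.concatenate(([0], active))
coefs = np.zeros(Psi.shape[1])
coefs[sel] = np.linalg.lstsq(Psi[:, sel], y, rcond=None)[0]

print(coefs)  # compare with the projection-based coefficients above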

It is not always clear when regression is preferable to projection and vice versa. However, there are cases where regression is certainly the only option, namely when you are working with an existing experimental design; note that projection with quadrature requires specific data points!
Also, quadrature does not scale well with the problem dimension (sparse quadrature can help to some extent).
Sparse regression also has the advantage that you can use higher polynomial degrees and interaction terms without overfitting, compared to regression with ordinary least squares.