The sparse Polynomial Chaos expansion: a fully Bayesian approach with
joint priors on the coefficients and global selection of terms
- URL: http://arxiv.org/abs/2204.06043v1
- Date: Tue, 12 Apr 2022 19:00:00 GMT
- Title: The sparse Polynomial Chaos expansion: a fully Bayesian approach with
joint priors on the coefficients and global selection of terms
- Authors: Paul-Christian B\"urkner, Ilja Kr\"oker, Sergey Oladyshkin, Wolfgang
Nowak
- Abstract summary: Polynomial chaos expansion (PCE) is a versatile tool widely used in uncertainty quantification and machine learning.
Sparse PCE concepts can overcome the curse of dimensionality very efficiently, but must pay specific attention to their strategies for choosing training points.
In this study, we develop and evaluate a fully Bayesian approach to establish the PCE representation via joint shrinkage priors and Markov chain Monte Carlo.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Polynomial chaos expansion (PCE) is a versatile tool widely used in
uncertainty quantification and machine learning, but its successful application
depends strongly on the accuracy and reliability of the resulting PCE-based
response surface. High accuracy typically requires high polynomial degrees,
which demand many training points, especially in high-dimensional problems,
due to the curse of dimensionality. So-called sparse PCE concepts work with a
much smaller selection of basis polynomials than conventional PCE approaches
and can overcome the curse of dimensionality very efficiently, but they must
pay specific attention to their strategies for choosing training points.
Furthermore, the approximation error represents an uncertainty that most
existing PCE-based methods do not estimate. In this study, we develop and
evaluate a fully Bayesian approach to establish the PCE representation via
joint shrinkage priors and Markov chain Monte Carlo. The suggested Bayesian PCE
model directly aims to solve the two challenges named above: achieving a sparse
PCE representation and estimating uncertainty of the PCE itself. The embedded
Bayesian regularization via the joint shrinkage prior allows using higher
polynomial degrees for given training points due to its ability to handle
underdetermined situations, where the number of considered PCE coefficients
could be much larger than the number of available training points. We also
explore multiple variable selection methods to construct sparse PCE expansions
based on the established Bayesian representations, while globally selecting the
most meaningful orthonormal polynomials given the available training data. We
demonstrate the advantages of our Bayesian PCE and the corresponding
sparsity-inducing methods on several benchmarks.
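For intuition, the sketch below shows the core mechanics in miniature, under loudly stated assumptions: it is not the authors' implementation (which uses joint shrinkage priors sampled by MCMC). scikit-learn's ARDRegression stands in for the shrinkage prior, the input is one-dimensional and standard normal, and all names, degrees, and thresholds are illustrative choices.

```python
# Minimal sketch of a sparse Bayesian PCE fit (illustrative stand-in, NOT the
# paper's joint-shrinkage-prior + MCMC implementation). Assumes a 1-D standard
# normal input, so the orthonormal basis is He_d(x) / sqrt(d!) with He_d the
# probabilists' Hermite polynomials.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import ARDRegression  # sparsity-inducing Bayesian prior

def pce_basis(x, degree):
    """Evaluate the orthonormal Hermite basis He_0..He_degree at points x."""
    cols = []
    for d in range(degree + 1):
        c = np.zeros(d + 1)
        c[d] = 1.0                               # coefficient vector selecting He_d
        cols.append(hermeval(x, c) / np.sqrt(factorial(d)))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x_train = rng.standard_normal(30)                # few training points ...
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(30)

X = pce_basis(x_train, degree=40)                # ... many candidate terms (41 > 30)
model = ARDRegression().fit(X, y_train)          # shrinkage handles the p > n case

# Crude global term selection: keep basis terms whose coefficient survived shrinkage.
active = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("retained basis terms:", active)

# The Bayesian fit also yields predictive uncertainty of the surrogate itself.
mean, std = model.predict(pce_basis(np.linspace(-2.0, 2.0, 5), 40), return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```

The underdetermined setting (41 candidate coefficients, 30 training points) mirrors the abstract's point: the shrinkage prior regularizes the fit where ordinary least squares would fail.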
Related papers
- Deep Polynomial Chaos Expansion [5.6189692698829115]
Polynomial chaos expansion (PCE) is a classical and widely used surrogate modeling technique. DeepPCE is a deep generalization of PCE that scales effectively to high-dimensional input spaces.
arXiv Detail & Related papers (2025-07-28T18:59:46Z) - Last-Iterate Global Convergence of Policy Gradients for Constrained Reinforcement Learning [62.81324245896717]
We introduce an exploration-agnostic algorithm, called C-PG, which exhibits global last-iterate convergence guarantees under (weak) gradient domination assumptions.
We numerically validate our algorithms on constrained control problems, and compare them with state-of-the-art baselines.
arXiv Detail & Related papers (2024-07-15T14:54:57Z) - Offline Bayesian Aleatoric and Epistemic Uncertainty Quantification and Posterior Value Optimisation in Finite-State MDPs [3.1139806580181006]
We address the challenge of quantifying Bayesian uncertainty in offline use cases of finite-state Markov Decision Processes (MDPs) with unknown dynamics.
We use standard Bayesian reinforcement learning methods to capture the posterior uncertainty in MDP parameters.
We then analytically compute the first two moments of the return distribution across posterior samples and apply the law of total variance (see the sketch after this list).
We highlight the real-world impact and computational scalability of our method by applying it to the AI Clinician problem.
arXiv Detail & Related papers (2024-06-04T16:21:14Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Provably Efficient UCB-type Algorithms For Learning Predictive State
Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non linear settings via deep learning with bias constraints.
A second motivation for bias-constrained estimation (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Mini-data-driven Deep Arbitrary Polynomial Chaos Expansion for
Uncertainty Quantification [9.586968666707529]
This paper proposes a deep arbitrary polynomial chaos expansion (Deep aPCE) method to improve the balance between surrogate model accuracy and training data cost.
Four numerical examples and an actual engineering problem are used to verify the effectiveness of the Deep aPCE method.
arXiv Detail & Related papers (2021-07-22T02:49:07Z) - Bayesian decision-making under misspecified priors with applications to
meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z) - An Offline Risk-aware Policy Selection Method for Bayesian Markov
Decision Processes [0.0]
Exploitation vs Caution (EvC) is a paradigm that elegantly incorporates model uncertainty abiding by the Bayesian formalism.
We validate EvC with state-of-the-art approaches in different discrete, yet simple, environments offering a fair variety of MDP classes.
In the tested scenarios EvC manages to select robust policies and hence stands out as a useful tool for practitioners.
arXiv Detail & Related papers (2021-05-27T20:12:20Z) - Automatic selection of basis-adaptive sparse polynomial chaos expansions
for engineering applications [0.0]
We describe three state-of-the-art basis-adaptive approaches for sparse chaos expansions.
We conduct an extensive benchmark in terms of global approximation accuracy on a large set of computational models.
We introduce a novel solver and basis adaptivity selection scheme guided by cross-validation error.
arXiv Detail & Related papers (2020-09-10T12:13:57Z) - Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for developing a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z) - Repulsive Mixture Models of Exponential Family PCA for Clustering [127.90219303669006]
The mixture extension of exponential family principal component analysis (EPCA) was designed to encode much more structural information about the data distribution than the traditional EPCA.
The traditional mixture of local EPCAs has the problem of model redundancy, i.e., overlaps among mixing components, which may cause ambiguity for data clustering.
In this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
arXiv Detail & Related papers (2020-04-07T04:07:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.