A Fully Bayesian Gradient-Free Supervised Dimension Reduction Method
using Gaussian Processes
- URL: http://arxiv.org/abs/2008.03534v2
- Date: Thu, 1 Jul 2021 14:59:07 GMT
- Title: A Fully Bayesian Gradient-Free Supervised Dimension Reduction Method
using Gaussian Processes
- Authors: Raphael Gautier, Piyush Pandita, Sayan Ghosh, Dimitri Mavris
- Abstract summary: The proposed methodology is gradient-free and fully Bayesian, as it quantifies uncertainty in both the low-dimensional subspace and the surrogate model parameters.
It is validated on multiple datasets from engineering and science and compared to two other state-of-the-art methods.
- Score: 3.2636291418858474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern day engineering problems are ubiquitously characterized by
sophisticated computer codes that map parameters or inputs to an underlying
physical process. In other situations, experimental setups are used to model
the physical process in a laboratory, ensuring high precision while being
costly in materials and logistics. In both scenarios, only a limited amount of
data can be generated by querying the expensive information source at a finite
number of inputs or designs. This problem is compounded further in the presence
of a high-dimensional input space. State-of-the-art parameter space dimension
reduction methods, such as active subspace, aim to identify a subspace of the
original input space that is sufficient to explain the output response. These
methods are restricted by their reliance on gradient evaluations or copious
data, making them inadequate for expensive problems without direct access to
gradients. The proposed methodology is gradient-free and fully Bayesian, as it
quantifies uncertainty in both the low-dimensional subspace and the surrogate
model parameters. This enables a full quantification of epistemic uncertainty
and robustness to limited data availability. It is validated on multiple
datasets from engineering and science and compared to two other
state-of-the-art methods based on four aspects: a) recovery of the active
subspace, b) deterministic prediction accuracy, c) probabilistic prediction
accuracy, and d) training time. The comparison shows that the proposed method
improves the active subspace recovery and predictive accuracy, in both the
deterministic and probabilistic senses, when only a few model observations are
available for training, at the cost of increased training time.
Related papers
- Dimensionality reduction can be used as a surrogate model for high-dimensional forward uncertainty quantification [3.218294891039672]
We introduce a method to construct a surrogate model from the results of dimensionality reduction in uncertainty quantification.
The proposed approach differs from a sequential application of dimensionality reduction followed by surrogate modeling.
The proposed method is demonstrated through two uncertainty quantification problems characterized by high-dimensional input uncertainties.
arXiv Detail & Related papers (2024-02-07T04:47:19Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- A Metaheuristic for Amortized Search in High-Dimensional Parameter Spaces [0.0]
We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT boosts the performance of random search and simulated annealing against well-established metaheuristics.
arXiv Detail & Related papers (2023-09-28T14:25:14Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
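The probabilistic representation mentioned here, writing a diffusion PDE's solution as an expectation over random particles, can be sketched with a plain Monte Carlo estimator for the heat equation. This is the textbook Feynman-Kac special case, not the paper's neural solver; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def heat_mc(u0, x, t, diffusivity, n_particles=200_000, seed=0):
    """Feynman-Kac / random-walk estimate for the heat equation
    u_t = D * u_xx: u(x, t) = E[u0(x + sqrt(2 D t) Z)], Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_particles)
    return np.mean(u0(x + np.sqrt(2.0 * diffusivity * t) * z))

# For u0(x) = x^2 the exact solution is u(x, t) = x^2 + 2*D*t.
est = heat_mc(lambda x: x**2, x=1.0, t=0.5, diffusivity=0.25)
print(est)  # ≈ 1.25, up to Monte Carlo error
```

The paper's contribution is to use this particle ensemble as an unsupervised training signal for a neural solver rather than as a pointwise estimator.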
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
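For context, the baseline exact DMD that this work extends can be sketched in a few lines. This is the standard SVD-based algorithm, not the t-model extension proposed in the paper; names are illustrative.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact Dynamic Mode Decomposition: fit a linear operator A with
    X' ≈ A X from a snapshot matrix, via a rank-r SVD projection."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].T
    A_tilde = (U.T @ Y @ V) / s        # projected operator, = U^T Y V S^-1
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W            # exact DMD modes
    return eigvals, modes

# Recover the spectrum of a known 2-D linear system from its trajectory.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 21))
X[:, 0] = [1.0, 0.0]
for k in range(20):
    X[:, k + 1] = A @ X[:, k]
eigvals, _ = dmd(X, rank=2)
print(np.sort_complex(eigvals))  # recovers eig(A) = 0.9 ± 0.2j
```

The t-model approach in the paper augments this picture with a memory term from the Mori-Zwanzig formalism to account for the unresolved dimensions.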
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- A survey of unsupervised learning methods for high-dimensional uncertainty quantification in black-box-type problems [0.0]
We construct surrogate models for uncertainty quantification (UQ) of complex partial differential equations (PDEs).
The curse of dimensionality can be mitigated by projecting onto a lower-dimensional subspace identified with suitable unsupervised learning techniques.
We demonstrate both the advantages and limitations of the m-PCE model and conclude that it provides a cost-effective alternative to deep subspace methods.
arXiv Detail & Related papers (2022-02-09T16:33:40Z)
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in complex systems.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
- Learning Functional Priors and Posteriors from Data and Physics [3.537267195871802]
We develop a new framework based on deep neural networks to extrapolate in space-time using historical data.
In the first stage, we employ a physics-informed Generative Adversarial Network (PI-GAN) to learn a functional prior.
In the second stage, we employ the Hamiltonian Monte Carlo (HMC) method to estimate the posterior in the latent space of PI-GANs.
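The HMC step mentioned here is a standard algorithm. A minimal generic sketch on a toy Gaussian target (not the PI-GAN latent posterior; all names and settings are illustrative) looks like:

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples=2000, step=0.1,
               n_leapfrog=20, seed=0):
    """Basic Hamiltonian Monte Carlo: leapfrog integration with an
    auxiliary momentum, then a Metropolis accept/reject correction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step * grad_logp(x_new)   # initial half step
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * grad_logp(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)   # final half step
        # Metropolis correction keeps the exact target invariant.
        log_accept = (logp(x_new) - 0.5 * p_new @ p_new
                      - logp(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian.
target_logp = lambda x: -0.5 * x @ x
target_grad = lambda x: -x
chain = hmc_sample(target_logp, target_grad, x0=np.zeros(2))
print(chain.mean(0), chain.std(0))  # mean ≈ [0, 0], std ≈ [1, 1]
```

In the paper this machinery is run in the low-dimensional latent space of the PI-GAN, where the gradient of the log posterior is available by backpropagation.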
arXiv Detail & Related papers (2021-06-08T03:03:24Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
arXiv Detail & Related papers (2020-10-19T01:27:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.