Gaussian Process Subspace Regression for Model Reduction
- URL: http://arxiv.org/abs/2107.04668v1
- Date: Fri, 9 Jul 2021 20:41:23 GMT
- Title: Gaussian Process Subspace Regression for Model Reduction
- Authors: Ruda Zhang and Simon Mak and David Dunson
- Abstract summary: Subspace-valued functions arise in a wide range of problems, including parametric reduced order modeling (PROM).
In PROM, each parameter point can be associated with a subspace, which is used for Petrov-Galerkin projections of large system matrices.
We propose a novel Bayesian nonparametric model for subspace prediction: the Gaussian Process Subspace regression (GPS) model.
- Score: 7.41244589428771
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subspace-valued functions arise in a wide range of problems, including
parametric reduced order modeling (PROM). In PROM, each parameter point can be
associated with a subspace, which is used for Petrov-Galerkin projections of
large system matrices. Previous efforts to approximate such functions use
interpolations on manifolds, which can be inaccurate and slow. To tackle this,
we propose a novel Bayesian nonparametric model for subspace prediction: the
Gaussian Process Subspace regression (GPS) model. This method is extrinsic and
intrinsic at the same time: with multivariate Gaussian distributions on the
Euclidean space, it induces a joint probability model on the Grassmann
manifold, the set of fixed-dimensional subspaces. The GPS adopts a simple yet
general correlation structure, and a principled approach for model selection.
Its predictive distribution admits an analytical form, which allows for
efficient subspace prediction over the parameter space. For PROM, the GPS
provides a probabilistic prediction at a new parameter point that retains the
accuracy of local reduced models, at a computational complexity that does not
depend on system dimension, and thus is suitable for online computation. We
give four numerical examples to compare our method to subspace interpolation,
as well as two methods that interpolate local reduced models. Overall, GPS is
the most data efficient, more computationally efficient than subspace
interpolation, and gives smooth predictions with uncertainty quantification.
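To make the mechanics concrete, here is a minimal, heavily simplified sketch of GP-based subspace prediction. It is not the paper's GPS model (whose predictive distribution on the Grassmann manifold is derived analytically); it merely GP-interpolates Procrustes-aligned orthonormal bases and re-orthonormalizes, with an assumed RBF kernel and hypothetical helper names.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    """Squared-exponential kernel between parameter points (rows)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def align(U, U_ref):
    """Orthogonal Procrustes: rotate basis U to best match U_ref,
    since a subspace has many equivalent orthonormal bases."""
    W, _, Vt = np.linalg.svd(U.T @ U_ref)
    return U @ (W @ Vt)

def predict_subspace(params, bases, p_new, lengthscale=0.5, jitter=1e-8):
    """GP-interpolate aligned basis matrices at p_new, then project the
    result back to an orthonormal basis (a point on the Grassmannian)."""
    U_aligned = np.stack([align(U, bases[0]) for U in bases])   # (m, n, k)
    K = rbf_kernel(params, params, lengthscale) + jitter * np.eye(len(params))
    k_star = rbf_kernel(p_new[None, :], params, lengthscale).ravel()
    w = np.linalg.solve(K, k_star)              # GP posterior mean weights
    M = np.einsum("i,ink->nk", w, U_aligned)    # weighted combination
    Q, _ = np.linalg.qr(M)                      # re-orthonormalize
    return Q

# toy usage: 5 training parameters in 1-D, 3-dim subspaces of R^50
rng = np.random.default_rng(0)
params = np.linspace(0.0, 1.0, 5)[:, None]
bases = [np.linalg.qr(rng.standard_normal((50, 3)))[0] for _ in range(5)]
U_new = predict_subspace(params, bases, np.array([0.37]))
```

The actual GPS model instead returns a full predictive distribution, so the prediction carries uncertainty quantification; the QR step above recovers only a point estimate.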
Related papers
- A Statistical Machine Learning Approach for Adapting Reduced-Order Models using Projected Gaussian Process [4.658371840624581]
Proper Orthogonal Decomposition (POD) computes optimal basis modes that span a low-dimensional subspace where the Reduced-Order Models (ROMs) reside.
This paper proposes a Projected Gaussian Process (pGP) model and formulates the adaptation of the POD basis as a supervised statistical learning problem.
Numerical examples are presented to demonstrate the advantages of the proposed pGP for adapting the POD basis under parameter changes.
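For context, the POD step this entry builds on is a plain truncated SVD of a snapshot matrix; a minimal sketch follows (the pGP adaptation itself is not reproduced here, and all names are illustrative).

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD: the leading k left singular vectors of the snapshot matrix
    span the optimal k-dimensional subspace in the least-squares sense."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)   # captured variance
    return U[:, :k], energy[k - 1]

# columns of X are solution snapshots at different times/parameters
X = np.random.default_rng(0).standard_normal((500, 40))
Phi, captured = pod_basis(X, k=10)   # Phi: (500, 10) basis; the ROM lives here
```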
arXiv Detail & Related papers (2024-10-18T00:02:43Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
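The quasi-process construction and its Stratonovich-like augmentation are specific to that paper; the sketch below shows only a simpler, standard way to obtain circle-valued random functions, by pushing a bivariate GP through atan2 (kernel and lengthscale are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2 / 0.1 ** 2) + 1e-8 * np.eye(200)
L = np.linalg.cholesky(K)
u = L @ rng.standard_normal(200)       # two independent GP draws
v = L @ rng.standard_normal(200)
theta = np.arctan2(v, u)               # a circle-valued random function
```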
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Distance and Collision Probability Estimation from Gaussian Surface Models [0.9208007322096533]
Continuous-space collision probability estimation is critical for uncertainty-aware motion planning.
Most collision detection and avoidance approaches assume the robot is modeled as a sphere, but ellipsoidal representations provide tighter approximations.
State-of-the-art methods derive the Euclidean distance and gradient by processing raw point clouds.
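The paper derives distances and gradients from Gaussian surface models analytically; purely as a baseline illustration, a naive Monte Carlo estimate of collision probability for a sphere-shaped robot against one Gaussian-modeled surface point might look like this (all names and the single-Gaussian assumption are illustrative).

```python
import numpy as np

def collision_probability(p, r, mu, Sigma, n=100_000, seed=0):
    """Estimate P(||X - p|| <= r) for X ~ N(mu, Sigma): the chance a
    Gaussian-modeled obstacle point falls inside a robot sphere of
    radius r centered at p."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mu, Sigma, size=n)
    return float(np.mean(np.linalg.norm(X - p, axis=1) <= r))

prob = collision_probability(p=np.zeros(3), r=0.5,
                             mu=np.array([0.6, 0.0, 0.0]),
                             Sigma=0.01 * np.eye(3))
```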
arXiv Detail & Related papers (2024-01-31T21:28:40Z)
- GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- Subsurface Characterization using Ensemble-based Approaches with Deep Generative Models [2.184775414778289]
Inverse modeling is limited for ill-posed, high-dimensional applications due to computational costs and poor prediction accuracy with sparse datasets.
We combine a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) and an Ensemble Smoother with Multiple Data Assimilation (ES-MDA).
WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional latent space and ES-MDA updates the latent variables by assimilating available measurements.
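ES-MDA itself is a standard algorithm; a minimal sketch of the latent-space update loop follows, with `forward` standing in for the trained WGAN-GP generator composed with the measurement operator (an assumption for illustration).

```python
import numpy as np

def es_mda(z_ens, forward, d_obs, R, alphas, seed=0):
    """Ensemble Smoother with Multiple Data Assimilation: repeatedly
    nudge latent ensemble members toward noise-inflated observations.
    z_ens: (n_ens, n_latent); forward: latent vector -> predicted data."""
    rng = np.random.default_rng(seed)
    n_lat = z_ens.shape[1]
    for a in alphas:                      # standard choice: sum(1/a) == 1
        D = np.array([forward(z) for z in z_ens])        # (n_ens, n_obs)
        C = np.cov(z_ens.T, D.T)
        Czd, Cdd = C[:n_lat, n_lat:], C[n_lat:, n_lat:]
        K = Czd @ np.linalg.inv(Cdd + a * R)             # Kalman-like gain
        noise = rng.multivariate_normal(np.zeros(len(d_obs)), a * R,
                                        size=len(z_ens))
        z_ens = z_ens + (d_obs + noise - D) @ K.T
    return z_ens
```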
arXiv Detail & Related papers (2023-10-02T01:27:10Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
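A rough sketch of the inverse-free ingredients: conjugate-gradient solves replace explicit inversions, and Hutchinson probes turn trace terms into solves. The paper's probabilistic unrolling additionally backpropagates through the solver iterations, which is not shown here.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=200):
    """Solve A x = b for SPD A using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hutchinson probe: E[v^T A^{-1} v] = tr(A^{-1}), estimated via CG solves
rng = np.random.default_rng(0)
n = 500
A = 2.0 * np.eye(n)               # stand-in SPD covariance (in practice,
v = rng.choice([-1.0, 1.0], n)    # supply a structured matvec, not a matrix)
trace_est = v @ conjugate_gradient(lambda x: A @ x, v)  # ~ tr(A^{-1}) = 250
```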
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
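The conditional KL construction and the MAP solve are the paper's contribution; the unconditional Karhunen-Loève representation it builds on is easy to sketch (the covariance model and grid here are illustrative).

```python
import numpy as np

def kl_expansion(cov, n_modes, xi):
    """Truncated Karhunen-Loeve expansion of a zero-mean Gaussian field:
    field = sum_i sqrt(lambda_i) * phi_i * xi_i over leading eigenpairs."""
    lam, phi = np.linalg.eigh(cov)
    idx = np.argsort(lam)[::-1][:n_modes]       # largest eigenvalues first
    return phi[:, idx] @ (np.sqrt(lam[idx]) * xi)

s = np.linspace(0.0, 1.0, 200)
cov = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)  # exponential covariance
xi = np.random.default_rng(0).standard_normal(20)     # latent variables
field = kl_expansion(cov, 20, xi)                     # one field realization
```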
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
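The paper applies modern (typically neural) LFI tools rather than the classic scheme below, but rejection ABC conveys the core likelihood-free idea: simulate from the forward model, summarize, and keep parameters whose simulations land near the observed data (all names here are illustrative).

```python
import numpy as np

def abc_rejection(simulator, summary, y_obs, prior_sampler, eps, n_draws,
                  seed=0):
    """Rejection ABC: accept prior draws whose simulated summaries fall
    within eps of the observed summary. No likelihood is ever evaluated."""
    rng = np.random.default_rng(seed)
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        s_sim = summary(simulator(theta, rng))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)   # samples from the approximate posterior
```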
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Fast covariance parameter estimation of spatial Gaussian process models using neural networks [0.0]
We train NNs to take moderately sized spatial fields or variograms as input and return the range and noise-to-signal covariance parameters.
Once trained, the NNs provide estimates with accuracy comparable to ML estimation, at a speedup of a factor of 100 or more.
This work can be easily extended to other, more complex, spatial problems and provides a proof-of-concept for this use of machine learning in computational statistics.
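The variogram inputs those NNs consume can be computed directly from data; a minimal empirical semivariogram sketch follows (the NN and its training are omitted, and all names are illustrative).

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Empirical semivariogram: half the mean squared difference of the
    field over point pairs, grouped into separation-distance bins."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    diff2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    d, g = dist[iu], 0.5 * diff2[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(g[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)   # this vector would be the NN's input
```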
arXiv Detail & Related papers (2020-12-30T22:06:26Z)
- Linear-time inference for Gaussian Processes on one dimension [17.77516394591124]
We investigate data sampled on one dimension, for which state-space models are popular due to their linearly scaling computational costs.
We provide the first general proof of the conjecture that state-space models can approximate any one-dimensional Gaussian process.
We develop parallelized algorithms for performing inference and learning in the LEG (Latent Exponentially Generated) model, test them on real and synthetic data, and demonstrate scaling to datasets with billions of samples.
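The LEG family is a learned, more general state-space construction; the classic special case below (a Matérn-1/2, i.e. Ornstein-Uhlenbeck, kernel rewritten as a scalar state-space model) shows how Kalman filtering yields O(n) GP inference (time points assumed sorted).

```python
import numpy as np

def ou_gp_filter(t, y, sigma2, ell, noise):
    """Linear-time GP inference for the Matern-1/2 (OU) kernel
    k(s, t) = sigma2 * exp(-|s - t| / ell) via a scalar Kalman filter."""
    m, P = 0.0, sigma2                  # stationary prior state
    log_lik, means = 0.0, []
    t_prev = t[0]
    for ti, yi in zip(t, y):
        a = np.exp(-(ti - t_prev) / ell)            # state transition
        m, P = a * m, a * a * P + sigma2 * (1.0 - a * a)
        S = P + noise                               # innovation variance
        log_lik += -0.5 * (np.log(2 * np.pi * S) + (yi - m) ** 2 / S)
        K = P / S                                   # Kalman gain
        m, P = m + K * (yi - m), (1.0 - K) * P
        means.append(m)
        t_prev = ti
    return np.array(means), log_lik
```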
arXiv Detail & Related papers (2020-03-11T23:20:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.