Stochastic PDE representation of random fields for large-scale Gaussian
process regression and statistical finite element analysis
- URL: http://arxiv.org/abs/2305.13879v2
- Date: Tue, 5 Sep 2023 15:51:53 GMT
- Title: Stochastic PDE representation of random fields for large-scale Gaussian
process regression and statistical finite element analysis
- Authors: Kim Jie Koh and Fehmi Cirak
- Abstract summary: We develop a framework for large-scale statistical finite element analysis and Gaussian process (GP) regression on complex geometries.
We obtain the relevant prior probability densities with a sparse precision matrix.
The properties of the priors are governed by the parameters and possibly fractional order of the SPDE.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The efficient representation of random fields on geometrically complex
domains is crucial for Bayesian modelling in engineering and machine learning.
Today's prevalent random field representations are either intended for
unbounded domains or are too restrictive in terms of possible field properties.
Because of these limitations, techniques leveraging the historically
established link between stochastic PDEs (SPDEs) and random fields have been
gaining interest. The SPDE representation is especially appealing for
engineering applications which already have a finite element discretisation for
solving the physical conservation equations. In contrast to the dense
covariance matrix of a random field, its inverse, the precision matrix, is
usually sparse and equal to the stiffness matrix of an elliptic SPDE. We use
the SPDE representation to develop a scalable framework for large-scale
statistical finite element analysis and Gaussian process (GP) regression on
complex geometries. The statistical finite element method (statFEM) introduced
by Girolami et al. (2022) is a novel approach for synthesising measurement data
and finite element models. In both statFEM and GP regression, we use the SPDE
formulation to obtain the relevant prior probability densities with a sparse
precision matrix. The properties of the priors are governed by the parameters
and possibly fractional order of the SPDE, so that we can model anisotropic,
non-stationary random fields with arbitrary smoothness on bounded domains and
manifolds. The observation models for statFEM and GP regression are such that
the posterior probability densities are Gaussian with a closed-form mean and
precision. The respective mean vector and precision matrix can be evaluated
using only sparse matrix operations. We demonstrate the versatility of the
proposed framework and its convergence properties with Poisson and thin-shell
examples.
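To make the sparse-precision construction concrete, here is a minimal,
hypothetical sketch (not the authors' code) of SPDE-based GP regression in the
spirit of the Lindgren-Rue-Lindstroem construction: a Matern-type prior is
assembled from finite element mass and stiffness matrices for the Whittle-Matern
SPDE with integer order alpha = 2, and the Gaussian posterior is computed using
sparse operations only. The uniform 1D mesh, lumped mass matrix, pointwise
observations and all variable names are simplifying assumptions for
illustration.
```python
# Hypothetical sketch of SPDE-based GP regression with a sparse precision
# matrix; the 1D FEM mesh, parameter values and variable names are
# illustrative assumptions, not the paper's implementation.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Linear FEM matrices on a uniform 1D mesh (a stand-in for the complex
# geometries handled in the paper).
n = 200
h = 1.0 / (n - 1)
d = 2.0 * np.ones(n)
d[[0, -1]] = 1.0
G = sp.diags([d, -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1]) / h  # stiffness
C = sp.diags(np.r_[h / 2, h * np.ones(n - 2), h / 2])                # lumped mass

# Matern-type prior via the SPDE (kappa^2 - Laplacian) u = white noise with
# order alpha = 2: up to a variance scaling, the precision is
# Q = (kappa^2 C + G) C^{-1} (kappa^2 C + G), and it stays sparse.
kappa = 20.0
K = kappa**2 * C + G
C_inv = sp.diags(1.0 / C.diagonal())
Q_prior = (K @ C_inv @ K).tocsc()

# GP regression: noisy point observations y = A u + e, e ~ N(0, sigma^2 I).
obs = np.array([20, 80, 140, 180])
A = sp.csr_matrix((np.ones(obs.size), (np.arange(obs.size), obs)),
                  shape=(obs.size, n))
sigma = 0.05
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * obs / (n - 1)) + sigma * rng.normal(size=obs.size)

# The posterior is Gaussian with a closed-form sparse precision and a mean
# obtained from a single sparse solve -- no dense covariance is ever formed.
Q_post = (Q_prior + (A.T @ A) / sigma**2).tocsc()
mean_post = spla.spsolve(Q_post, A.T @ y / sigma**2)
```
Because the posterior precision stays sparse, the dominant cost is a sparse
factorisation or solve, in contrast to the O(N^3) dense Cholesky of
covariance-based GP regression.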
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Convex Parameter Estimation of Perturbed Multivariate Generalized Gaussian Distributions [18.95928707619676]
We propose a convex formulation with well-established properties for MGGD parameters.
The proposed framework is flexible as it combines a variety of regularizations for the precision matrix, the mean and perturbations.
Experiments show more accurate estimation of the precision and covariance matrices, with similar performance for the mean vector parameter.
arXiv Detail & Related papers (2023-12-12T18:08:04Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance. (A generic sketch of this inverse-free idea is given after this list.)
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields more accurate pattern latency estimation than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Strong posterior contraction rates via Wasserstein dynamics [8.479040075763892]
In Bayesian statistics, posterior contraction rates (PCRs) quantify the speed at which the posterior distribution concentrates on arbitrarily small neighborhoods of a true model.
We develop a new approach to PCRs, with respect to strong norm distances on parameter spaces of functions.
arXiv Detail & Related papers (2022-03-21T06:53:35Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- The Random Feature Model for Input-Output Maps between Banach Spaces [6.282068591820945]
The random feature model is a parametric approximation to kernel or regression methods.
We propose a methodology for use of the random feature model as a data-driven surrogate for operators that map an input Banach space to an output Banach space.
arXiv Detail & Related papers (2020-05-20T17:41:40Z)
- Analysis of Bayesian Inference Algorithms by the Dynamical Functional Approach [2.8021833233819486]
We analyze an algorithm for approximate inference with large Gaussian latent variable models in a student-teacher scenario.
For the case of perfect data-model matching, the knowledge of static order parameters derived from the replica method allows us to obtain efficient algorithmic updates.
arXiv Detail & Related papers (2020-01-14T17:22:02Z)
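As referenced in the probabilistic unrolling entry above, the following is a
minimal, hypothetical sketch of the general inverse-free idea: gradients of a
Gaussian log-likelihood are estimated with conjugate-gradient solves and
Hutchinson trace probes instead of explicit matrix inversion. The toy
covariance model and all names are assumptions for illustration; the paper's
method additionally unrolls and backpropagates through the solver iterations.
```python
# Hedged sketch of inverse-free maximum likelihood gradients: Monte Carlo
# trace probes plus conjugate-gradient (CG) solves replace matrix inversion.
# The toy covariance model is an illustrative assumption, not the paper's
# algorithm.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
n = 300
F = rng.normal(size=(n, 5)) / np.sqrt(n)   # fixed low-rank factor
theta = 0.5                                # scalar parameter to differentiate
y = rng.normal(size=n)

# Sigma(theta) = theta * I + F F^T, accessed only via matrix-vector products.
Sigma = LinearOperator((n, n), matvec=lambda x: theta * x + F @ (F.T @ x))
dSigma = lambda x: x                       # d Sigma / d theta = I here

# Gradient of log N(y | 0, Sigma) with respect to theta:
#   0.5 * a^T dSigma a - 0.5 * tr(Sigma^{-1} dSigma),  where a = Sigma^{-1} y.
a, _ = cg(Sigma, y)                        # CG solve instead of inversion
quad_term = 0.5 * a @ dSigma(a)

num_probes = 20                            # Hutchinson estimator for the trace
trace_est = 0.0
for _ in range(num_probes):
    z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
    u, _ = cg(Sigma, z)                    # u = Sigma^{-1} z, again via CG
    trace_est += u @ dSigma(z)
grad_theta = quad_term - 0.5 * trace_est / num_probes
```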