Statistical Finite Elements via Langevin Dynamics
- URL: http://arxiv.org/abs/2110.11131v1
- Date: Thu, 21 Oct 2021 13:30:41 GMT
- Title: Statistical Finite Elements via Langevin Dynamics
- Authors: Ömer Deniz Akyildiz, Connor Duffin, Sotirios Sabanis, Mark Girolami
- Abstract summary: We make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA), a Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based characterisation of this otherwise intractable measure.
We provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and posterior, in the Kullback-Leibler divergence and in Wasserstein-2, with further results on the effect of preconditioning.
- Score: 0.8602553195689513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent statistical finite element method (statFEM) provides a coherent
statistical framework to synthesise finite element models with observed data.
Through embedding uncertainty inside of the governing equations, finite element
solutions are updated to give a posterior distribution which quantifies all
sources of uncertainty associated with the model. However, to incorporate all
sources of uncertainty, one must integrate over the uncertainty associated with
the model parameters: the well-known forward problem of uncertainty quantification.
In this paper, we make use of Langevin dynamics to solve the statFEM forward
problem, studying the utility of the unadjusted Langevin algorithm (ULA), a
Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based
characterisation of this otherwise intractable measure. Due to the structure of
the statFEM problem, these methods are able to solve the forward problem
without explicit full PDE solves, requiring only sparse matrix-vector products.
ULA is also gradient-based, and hence provides a scalable approach up to high
degrees-of-freedom. Leveraging the theory behind Langevin-based samplers, we
provide theoretical guarantees on sampler performance, demonstrating
convergence, for both the prior and posterior, in the Kullback-Leibler
divergence, and, in Wasserstein-2, with further results on the effect of
preconditioning. Numerical experiments are also provided, for both the prior
and posterior, to demonstrate the efficacy of the sampler, with a Python
package also included.
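As a rough illustration of the mechanism described above (not the paper's implementation or its Python package), the following sketch runs ULA on a Gaussian target with a sparse precision matrix, so each iteration costs only a sparse matrix-vector product, with no linear solve or full PDE solve. The tridiagonal precision, step size, and dimension are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def ula_gaussian(Q, m, gamma, n_steps, seed=0):
    """Unadjusted Langevin algorithm targeting N(m, Q^{-1}).

    Each step is x <- x - gamma * Q(x - m) + sqrt(2 * gamma) * xi,
    where Q(x - m) is the gradient of the potential; for sparse Q
    this is a single sparse matrix-vector product per iteration.
    """
    rng = np.random.default_rng(seed)
    d = m.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_steps, d))
    for k in range(n_steps):
        grad = Q @ (x - m)  # gradient of U(x) = 0.5 (x-m)^T Q (x-m)
        x = x - gamma * grad + np.sqrt(2.0 * gamma) * rng.standard_normal(d)
        samples[k] = x
    return samples

# Toy target: tridiagonal (discrete-Laplacian-like) precision matrix,
# as might arise from a 1D finite element discretisation.
d = 50
Q = sp.diags([-np.ones(d - 1), 2.5 * np.ones(d), -np.ones(d - 1)],
             offsets=[-1, 0, 1], format="csr")
m = np.ones(d)
s = ula_gaussian(Q, m, gamma=0.05, n_steps=20000)
# After burn-in, the sample mean should sit near m (ULA has O(gamma)
# discretisation bias, controlled by the step size).
err = np.abs(s[5000:].mean(axis=0) - m).max()
```

Note the step size must satisfy a stability condition tied to the largest eigenvalue of Q (roughly gamma < 2/L); preconditioning, as studied in the paper, reshapes this spectrum to speed convergence.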
Related papers
- Robust Bayesian Nonnegative Matrix Factorization with Implicit
Regularizers [4.913248451323163]
We introduce a probabilistic model with implicit norm regularization for learning nonnegative matrix factorization (NMF)
We evaluate the model on several real-world datasets including Genomics of Drug Sensitivity in Cancer.
arXiv Detail & Related papers (2022-08-22T04:34:17Z)
- Probabilistic learning inference of boundary value problem with
uncertainties based on Kullback-Leibler divergence under implicit constraints [0.0]
We present a general methodology of a probabilistic learning inference that allows for estimating a posterior probability model for a boundary value problem from a prior probability model.
A statistical surrogate model of the implicit mapping, which represents the constraints, is introduced.
An application is then presented to illustrate the proposed theory; it also contributes, in its own right, to the three-dimensional homogenization of heterogeneous linear elastic media.
arXiv Detail & Related papers (2022-02-10T16:00:10Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the associated computational hurdle by embedding a low-rank approximation of the underlying dense covariance matrix.
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$ samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Spectral Unmixing With Multinomial Mixture Kernel and Wasserstein
Generative Adversarial Loss [4.56877715768796]
This study proposes a novel framework for spectral unmixing by using 1D convolution kernels and spectral uncertainty.
High-level representations are computed from data, and they are further modeled with the Multinomial Mixture Model.
Experiments are performed on both real and synthetic datasets.
arXiv Detail & Related papers (2020-12-12T16:49:01Z)
- Towards constraining warm dark matter with stellar streams through
neural simulation-based inference [7.608718235345664]
We introduce a likelihood-free Bayesian inference pipeline based on Amortised Approximate Likelihood Ratios (AALR)
We apply the method to the simplified case where stellar streams are only perturbed by dark matter subhaloes.
arXiv Detail & Related papers (2020-11-30T15:53:43Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.