Generative models and Bayesian inversion using Laplace approximation
- URL: http://arxiv.org/abs/2203.07755v1
- Date: Tue, 15 Mar 2022 10:05:43 GMT
- Title: Generative models and Bayesian inversion using Laplace approximation
- Authors: Manuel Marschall, Gerd Wübbeler, Franko Schmähling, Clemens Elster
- Abstract summary: Recently, inverse problems have been solved using generative models as highly informative priors.
We show that the derived Bayes estimates are consistent, in contrast to the approach that restricts inference to the low-dimensional manifold of the generative model.
- Score: 0.3670422696827525
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The Bayesian approach to solving inverse problems relies on the choice of a
prior. This critical ingredient allows the formulation of expert knowledge or
physical constraints in a probabilistic fashion and plays an important role for
the success of the inference. Recently, Bayesian inverse problems have been
solved using generative models as highly informative priors. Generative models
are a
popular tool in machine learning to generate data whose properties closely
resemble those of a given database. Typically, the generated distribution of
data is embedded in a low-dimensional manifold. For the inverse problem, a
generative model is trained on a database that reflects the properties of the
sought solution, such as typical structures of the tissue in the human brain in
magnetic resonance (MR) imaging. The inference is carried out in the
low-dimensional manifold determined by the generative model which strongly
reduces the dimensionality of the inverse problem. However, this procedure
yields a posterior that admits no Lebesgue density in the original variables,
and the accuracy reached can depend strongly on the quality of the generative
model. For linear Gaussian models, we explore an alternative Bayesian inference
based on probabilistic generative models that is carried out in the original
high-dimensional space. A Laplace approximation is employed to analytically
derive the required prior probability density function induced by the
generative model. Properties of the resulting inference are investigated.
Specifically, we show that the derived Bayes estimates are consistent, in
contrast to the approach employing the low-dimensional manifold of the
generative model.
The MNIST data set is used to construct numerical experiments which confirm our
theoretical findings.
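To make the construction concrete, the following is a minimal sketch, not the authors' code, of a Gaussian prior induced by a generative model in a linear Gaussian inverse problem. The decoder value `G0`, Jacobian `J`, and noise levels `s_dec`/`s_obs` are illustrative stand-ins for a trained probabilistic generative model, and the paper's Laplace approximation is reduced here to a single linearisation of the decoder around z = 0:

```python
import numpy as np

# Hypothetical sketch under these assumptions (all names illustrative):
#   - decoder G: R^k -> R^d with value G0 and Jacobian J at z = 0,
#   - probabilistic decoder  x | z ~ N(G(z), s_dec^2 I),
#   - latent prior           z ~ N(0, I_k),
#   - linear Gaussian model  y = A x + eta,  eta ~ N(0, s_obs^2 I).
# Linearising G around z = 0 yields a Gaussian prior in the full space:
#   x ~ N(mu_x, C_x)  with  mu_x = G(0),  C_x = J J^T + s_dec^2 I.

rng = np.random.default_rng(0)
d, k, m = 50, 5, 30          # ambient, latent, measurement dimensions
s_dec, s_obs = 0.1, 0.05     # decoder and observation noise std devs

# Stand-ins for a trained decoder's value and Jacobian at z = 0.
G0 = rng.normal(size=d)
J = rng.normal(size=(d, k))

mu_x = G0
C_x = J @ J.T + s_dec**2 * np.eye(d)   # linearised prior covariance

A = rng.normal(size=(m, d))            # linear forward operator
x_true = mu_x + J @ rng.normal(size=k)
y = A @ x_true + s_obs * rng.normal(size=m)

# Conjugate Gaussian posterior: precision and mean in closed form.
prior_prec = np.linalg.inv(C_x)
post_prec = prior_prec + (A.T @ A) / s_obs**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (prior_prec @ mu_x + A.T @ y / s_obs**2)

print("relative error of posterior mean:",
      np.linalg.norm(post_mean - x_true) / np.linalg.norm(x_true))
```

Because the decoder noise makes the approximated prior covariance full rank, the prior (and hence the posterior) admits a Lebesgue density in all d original variables, which is the property the abstract contrasts with inference restricted to the k-dimensional manifold of the generative model.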
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- Latent diffusion models for parameterization and data assimilation of facies-based geomodels [0.0]
Diffusion models are trained to generate new geological realizations from input fields characterized by random noise.
Latent diffusion models are shown to provide realizations that are visually consistent with samples from geomodeling software.
arXiv Detail & Related papers (2024-06-21T01:32:03Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference ( SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Data-driven Uncertainty Quantification in Computational Human Head Models [0.6745502291821954]
Modern biofidelic head model simulations are associated with very high computational cost and high-dimensional inputs and outputs.
In this study, a two-stage, data-driven manifold learning-based framework is proposed for uncertainty quantification (UQ) of computational head models.
It is demonstrated that the surrogate models provide highly accurate approximations of the computational model while significantly reducing the computational cost.
arXiv Detail & Related papers (2021-10-29T05:42:31Z)
- Latent Gaussian Model Boosting [0.0]
Tree-boosting shows excellent predictive accuracy on many data sets.
We obtain increased predictive accuracy compared to existing approaches in both simulated and real-world data experiments.
arXiv Detail & Related papers (2021-05-19T07:36:30Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.