Weak neural variational inference for solving Bayesian inverse problems without forward models: applications in elastography
- URL: http://arxiv.org/abs/2407.20697v1
- Date: Tue, 30 Jul 2024 09:46:03 GMT
- Title: Weak neural variational inference for solving Bayesian inverse problems without forward models: applications in elastography
- Authors: Vincent C. Scholz, Yaohua Zang, Phaedon-Stelios Koutsourelakis
- Abstract summary: We introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs).
The Weak Neural Variational Inference (WNVI) method complements real measurements with virtual observations derived from the physical model.
We demonstrate that WNVI is as accurate as, and more efficient than, traditional methods that rely on repeatedly solving the (non-linear) forward problem as a black box.
- Score: 1.6385815610837167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs), called Weak Neural Variational Inference (WNVI). The method complements real measurements with virtual observations derived from the physical model. In particular, weighted residuals are employed as probes to the governing PDE in order to formulate and solve a Bayesian inverse problem without ever formulating or solving a forward model. The formulation treats the state variables of the physical model as latent variables, inferred using Stochastic Variational Inference (SVI) alongside the usual unknowns. The approximate posterior employs neural networks to approximate the inverse mapping from state variables to the unknowns. We illustrate the proposed method in a biomedical setting where we infer spatially varying material properties from noisy tissue deformation data. We demonstrate that WNVI is not only as accurate as and more efficient than traditional methods that rely on repeatedly solving the (non)linear forward problem as a black box, but that it can also handle ill-posed forward problems (e.g., with insufficient boundary conditions).
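For readers who want a more explicit picture of how the virtual observations enter the inference, the sketch below lays out one way the ingredients described in the abstract could fit together. The notation (unknowns x, PDE state u, measurements y, weighting functions w_j, residual operator R, precision λ) and the chosen factorization of the variational family are illustrative assumptions on our part, not the paper's exact formulation.

```latex
% Schematic sketch of the WNVI construction described in the abstract.
% Notation is illustrative and not taken verbatim from the paper.
\begin{align*}
  r_j(u, x) &= \int_{\Omega} w_j \,\mathcal{R}(u; x)\, \mathrm{d}\Omega, \qquad j = 1, \dots, J
    && \text{(weighted residuals as virtual observations)} \\
  p(\hat{r} \mid u, x) &= \prod_{j=1}^{J} \mathcal{N}\big(r_j(u, x) \,\big|\, 0, \lambda^{-1}\big)
    && \text{(virtual likelihood enforcing the PDE in weak form)} \\
  p(u, x \mid y, \hat{r}) &\propto p(y \mid u)\, p(\hat{r} \mid u, x)\, p(u)\, p(x)
    && \text{(joint posterior over state and unknowns, no forward solver)} \\
  \mathcal{L}(\phi) &= \mathbb{E}_{q_\phi(u, x)}\!\left[ \log \frac{p(y \mid u)\, p(\hat{r} \mid u, x)\, p(u)\, p(x)}{q_\phi(u, x)} \right]
    && \text{(ELBO maximized with SVI)}
\end{align*}
```

In this reading, q_φ(u, x) = q_φ(u) q_φ(x | u), with q_φ(x | u) parameterized by a neural network that plays the role of the learned inverse map from state variables to unknowns.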
Related papers
- G2D2: Gradient-guided Discrete Diffusion for image inverse problem solving [55.185588994883226]
This paper presents a novel method for addressing linear inverse problems by leveraging image-generation models based on discrete diffusion as priors.
To the best of our knowledge, this is the first approach to use discrete diffusion model-based priors for solving image inverse problems.
arXiv Detail & Related papers (2024-10-09T06:18:25Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Stochastic full waveform inversion with deep generative prior for uncertainty quantification [0.0]
Full Waveform Inversion (FWI) involves solving a nonlinear and often non-unique inverse problem.
FWI presents challenges such as local minima trapping and inadequate handling of inherent uncertainties.
We propose leveraging deep generative models as the prior distribution of geophysical parameters for Bayesian inversion.
arXiv Detail & Related papers (2024-06-07T11:44:50Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks [25.051918587650636]
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
arXiv Detail & Related papers (2023-03-19T04:51:34Z) - VI-DGP: A variational inference method with deep generative prior for solving high-dimensional inverse problems [0.7734726150561089]
We propose a novel approximation method for estimating the high-dimensional posterior distribution.
This approach leverages a deep generative model to learn a prior model capable of generating spatially-varying parameters.
The proposed method can be fully implemented in an automatic differentiation manner.
arXiv Detail & Related papers (2023-02-22T06:48:10Z) - GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
arXiv Detail & Related papers (2023-01-30T06:27:48Z) - Physics-informed Information Field Theory for Modeling Physical Systems with Uncertainty Quantification [0.0]
Information field theory (IFT) provides the tools necessary to perform statistics over fields that are not necessarily Gaussian.
We extend IFT to physics-informed IFT (PIFT) by encoding the functional priors with information about the physical laws which describe the field.
The posteriors derived from this PIFT remain independent of any numerical scheme and can capture multiple modes.
We numerically demonstrate that the method correctly identifies when the physics cannot be trusted, in which case it automatically treats learning the field as a regression problem.
arXiv Detail & Related papers (2023-01-18T15:40:19Z) - A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Stochastic Variational Bayesian Inference for a Nonlinear Forward Model [2.578242050187029]
Variational Bayes (VB) has been used to facilitate the calculation of the posterior distribution in the context of nonlinear models from data.
Previously, an analytical formulation of VB has been derived for nonlinear model inference on data with additive Gaussian noise.
Here a solution is derived that avoids some of the approximations required of the analytical formulation.
arXiv Detail & Related papers (2020-07-03T13:30:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.