Error Analysis of Bayesian Inverse Problems with Generative Priors
- URL: http://arxiv.org/abs/2601.17374v1
- Date: Sat, 24 Jan 2026 08:45:27 GMT
- Title: Error Analysis of Bayesian Inverse Problems with Generative Priors
- Authors: Bamdad Hosseini, Ziqi Huang
- Abstract summary: We analyze such problems by presenting quantitative error bounds for minimum Wasserstein-2 generative models for the prior. We show that, under some assumptions, the error in the posterior due to the generative prior inherits the same rate as the prior with respect to the Wasserstein-1 distance.
- Score: 9.276062058338443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven methods for the solution of inverse problems have become widely popular in recent years thanks to the rise of machine learning techniques. A popular approach concerns the training of a generative model on additional data to learn a bespoke prior for the problem at hand. In this article we analyze such problems by presenting quantitative error bounds for minimum Wasserstein-2 generative models for the prior. We show that, under some assumptions, the error in the posterior due to the generative prior inherits the same rate as the prior with respect to the Wasserstein-1 distance. We further present numerical experiments verifying that aspects of our error analysis manifest in some benchmarks, followed by an elliptic PDE inverse problem where a generative prior is used to model a non-stationary field.
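The abstract's central claim, that posterior error inherits the prior's rate, can be illustrated with a toy 1D conjugate-Gaussian sketch (this is not the paper's method or proof, just an analogue): for `y = x + e` with Gaussian noise, two Gaussian priors that differ only in mean (an "exact" prior vs. a slightly misspecified "generative" one) yield posteriors whose Wasserstein-1 distance is a contraction of the prior gap.

```python
# Toy analogue of "posterior error inherits the prior's rate":
# 1D model y = x + e, e ~ N(0, sigma^2), with two equal-variance
# Gaussian priors. For equal-variance Gaussians, W1 equals the
# difference of means, so the gap can be tracked in closed form.

def posterior_mean(y, prior_mean, prior_var, noise_var):
    """Conjugate Gaussian update for the model y = x + e."""
    w = prior_var / (prior_var + noise_var)
    return w * y + (1.0 - w) * prior_mean

y = 0.7
sigma2 = 0.5
m_true, m_gen = 0.0, 0.1   # exact prior mean vs. hypothetical "generative" prior mean
v = 1.0                    # shared prior variance

mu_true = posterior_mean(y, m_true, v, sigma2)
mu_gen = posterior_mean(y, m_gen, v, sigma2)

prior_w1 = abs(m_gen - m_true)   # W1 between the two priors
post_w1 = abs(mu_gen - mu_true)  # W1 between the two posteriors

# The posterior gap is the prior gap scaled by sigma^2 / (v + sigma^2) < 1.
assert post_w1 <= prior_w1
```

Here the posterior mean gap is exactly `sigma2 / (v + sigma2)` times the prior mean gap, so a prior converging at some rate forces the posterior to converge at least as fast, which is the flavor of the result the paper establishes in far greater generality.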
Related papers
- On the Inverse Flow Matching Problem in the One-Dimensional and Gaussian Cases [58.228512978259026]
This paper studies the inverse problem of flow matching (FM) between distributions with finite exponential moment. It is motivated by modern generative AI applications such as the distillation of flow matching models.
arXiv Detail & Related papers (2025-12-29T07:45:36Z)
- Data-driven approaches to inverse problems [12.614421935598317]
Inverse problems serve as critical tools for visualizing internal structures beyond what is visible to the naked eye. A more recent paradigm considers deriving solutions to inverse problems in a data-driven manner. These notes offer an introduction to this data-driven paradigm for inverse problems.
arXiv Detail & Related papers (2025-06-13T12:44:32Z)
- Bayesian Model Parameter Learning in Linear Inverse Problems: Application in EEG Focal Source Imaging [49.1574468325115]
Inverse problems can be described as limited-data problems in which the signal of interest cannot be observed directly. We studied a linear inverse problem that included an unknown non-linear model parameter. We utilized a Bayesian model-based learning approach that allowed signal recovery and subsequent estimation of the model parameter.
arXiv Detail & Related papers (2025-01-07T18:14:24Z)
- Weak neural variational inference for solving Bayesian inverse problems without forward models: applications in elastography [1.6385815610837167]
We introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs)
The Weak Neural Variational Inference (WNVI) method complements real measurements with virtual observations derived from the physical model.
We demonstrate that WNVI is both as accurate as and more efficient than traditional methods that rely on repeatedly solving the (non-linear) forward problem as a black box.
arXiv Detail & Related papers (2024-07-30T09:46:03Z)
- Tackling the Problem of Distributional Shifts: Correcting Misspecified, High-Dimensional Data-Driven Priors for Inverse Problems [39.58317527488534]
In astrophysical applications, it is often difficult or even impossible to acquire independent and identically distributed samples from the underlying data-generating process of interest. We propose addressing this issue by iteratively updating the population-level distributions by retraining the model with posterior samples from different sets of observations. We show that, starting from a misspecified prior distribution, the updated distribution becomes progressively closer to the underlying population-level distribution.
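The iterative prior-correction idea described above can be sketched in a deliberately simplified 1D Gaussian setting (a toy analogue, not the paper's algorithm): start from a misspecified prior and, at each round, replace the prior mean with the posterior mean computed against an observation of the true signal. The misspecification then decays geometrically.

```python
# Toy sketch of iterative prior correction: the prior mean is
# repeatedly replaced by the posterior mean, pulling a misspecified
# prior toward the population-level truth.

def posterior_mean(y, prior_mean, prior_var, noise_var):
    """Conjugate Gaussian update for y = x + e, e ~ N(0, noise_var)."""
    w = prior_var / (prior_var + noise_var)
    return w * y + (1.0 - w) * prior_mean

x_true = 1.0            # population-level "truth" (hypothetical)
prior_mean = -2.0       # badly misspecified initial prior mean
prior_var, noise_var = 1.0, 1.0

errors = []
for _ in range(10):
    y = x_true          # noiseless observation keeps the sketch deterministic
    prior_mean = posterior_mean(y, prior_mean, prior_var, noise_var)
    errors.append(abs(prior_mean - x_true))

# With w = 0.5 here, each round halves the remaining misspecification.
assert all(b < a for a, b in zip(errors, errors[1:]))
```

With noisy observations the contraction holds only on average, which is why the paper's analysis works at the level of distributions rather than point estimates.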
arXiv Detail & Related papers (2024-07-24T22:39:27Z)
- Gaussian processes for Bayesian inverse problems associated with linear partial differential equations [0.8379286663107844]
This work is concerned with the use of Gaussian surrogate models for inverse problems associated with linear partial differential equations.
The type of Gaussian prior used is of critical importance with respect to how well the surrogate model will perform in terms of Bayesian inversion.
A number of different experiments illustrate the superiority of the PDE-informed Gaussian priors over more traditional priors.
arXiv Detail & Related papers (2023-07-17T09:31:26Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- Evaluating the Adversarial Robustness for Fourier Neural Operators [78.36413169647408]
Fourier Neural Operator (FNO) was the first to simulate turbulent flow with zero-shot super-resolution.
We generate adversarial examples for FNO based on norm-bounded data input perturbations.
Our results show that the model's robustness degrades rapidly with increasing perturbation levels.
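The norm-bounded perturbation scheme mentioned above can be sketched generically (this is an FGSM-style illustration on a toy scalar model, not the actual FNO experiment): perturb the input in the gradient-sign direction, within an L-infinity budget `eps`, to increase the loss.

```python
# Generic sketch of a norm-bounded adversarial perturbation:
# move the input one eps-step in the direction of the loss gradient's
# sign, keeping the perturbation inside an L-infinity ball of radius eps.

def loss(x):
    """Toy differentiable model loss (hypothetical, stands in for an FNO)."""
    return (3.0 * x - 1.0) ** 2

def grad(x, h=1e-6):
    """Central finite-difference gradient of the toy loss."""
    return (loss(x + h) - loss(x - h)) / (2.0 * h)

x0 = 0.5
eps = 0.1
x_adv = x0 + eps * (1.0 if grad(x0) > 0 else -1.0)

assert abs(x_adv - x0) <= eps + 1e-12   # perturbation respects the budget
assert loss(x_adv) > loss(x0)           # and increases the loss
```

For a real operator network the same recipe applies per input grid point, with the gradient obtained by backpropagation rather than finite differences.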
arXiv Detail & Related papers (2022-04-08T19:19:42Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
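The flavor of an iterative alternating scheme for a hierarchical model with a gamma hyperprior can be sketched in 1D (a hedged toy derivation, not the paper's exact algorithm): for `y = x + e`, `x | theta ~ N(0, theta)`, `theta ~ Gamma(alpha, scale=beta)`, both conditional MAP updates are available in closed form, and one simply alternates them.

```python
import math

# Toy alternating scheme for a 1D hierarchical model (illustrative only):
# alternate closed-form MAP updates of the signal x and the variance
# hyperparameter theta on the joint negative log posterior.

def x_update(y, theta, sigma2):
    # argmin_x (y - x)^2 / (2 sigma2) + x^2 / (2 theta)
    return y * theta / (theta + sigma2)

def theta_update(x, alpha, beta):
    # Positive root of the stationarity condition in theta:
    # theta^2 / beta + (3/2 - alpha) * theta - x^2 / 2 = 0
    a = alpha - 1.5
    return 0.5 * beta * (a + math.sqrt(a * a + 2.0 * x * x / beta))

y, sigma2 = 2.0, 0.25          # hypothetical data and noise variance
alpha, beta = 2.0, 1.0         # hypothetical gamma hyperprior parameters
x, theta = 0.0, 1.0
for _ in range(50):
    x = x_update(y, theta, sigma2)
    theta = theta_update(x, alpha, beta)

assert 0.0 < x < y             # shrinkage toward zero, but data-informed
assert theta > 0.0
```

Each update only decreases the joint objective, which is what makes such alternating schemes easy to implement and monitor; the paper's variational version additionally tracks uncertainty rather than a single MAP point.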
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.