Variational Gaussian Processes For Linear Inverse Problems
- URL: http://arxiv.org/abs/2311.00663v1
- Date: Wed, 1 Nov 2023 17:10:38 GMT
- Title: Variational Gaussian Processes For Linear Inverse Problems
- Authors: Thibault Randrianarisoa and Botond Szabo
- Abstract summary: In inverse problems the parameter or signal of interest is observed only indirectly, as an image of a given map, and the observations are typically corrupted with noise.
Bayes offers a natural way to regularize these problems via the prior distribution and provides a probabilistic solution, quantifying the remaining uncertainty in the problem.
We consider a collection of inverse problems including the heat equation, Volterra operator and Radon transform, and inducing-variable methods based on population and empirical spectral features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By now Bayesian methods are routinely used in practice for solving inverse
problems. In inverse problems the parameter or signal of interest is observed
only indirectly, as an image of a given map, and the observations are typically
further corrupted with noise. Bayes offers a natural way to regularize these
problems via the prior distribution and provides a probabilistic solution,
quantifying the remaining uncertainty in the problem. However, the
computational costs of standard, sampling-based Bayesian approaches can be
overly large in such complex models. Therefore, in practice variational Bayes
is becoming increasingly popular. Nevertheless, the theoretical understanding
of these methods is still relatively limited, especially in the context of
inverse problems. In our analysis we investigate variational Bayesian methods for
Gaussian process priors to solve linear inverse problems. We consider both
mildly and severely ill-posed inverse problems and work with the popular
inducing variables variational Bayes approach proposed by Titsias in 2009. We
derive posterior contraction rates for the variational posterior in general
settings and show that the minimax estimation rate can be attained by correctly
tuned procedures. As specific examples we consider a collection of inverse
problems including the heat equation, Volterra operator and Radon transform, and
inducing variable methods based on population and empirical spectral features.
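The inducing-variable construction of Titsias (2009) referenced in the abstract can be illustrated with a short NumPy sketch. This is a minimal example of the variational posterior mean for sparse GP regression only, not the paper's inverse-problem setting; the squared-exponential kernel, lengthscale, and noise level are assumptions chosen for the illustration.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel matrix between row-vector inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def titsias_predict(X, y, Z, Xs, sigma2=0.01, jitter=1e-8):
    """Posterior mean of the Titsias (2009) variational sparse GP at test inputs Xs.

    X, y : training inputs and noisy targets
    Z    : inducing-point locations (m << n of them)
    """
    Kuu = rbf(Z, Z) + jitter * np.eye(len(Z))
    Kuf = rbf(Z, X)
    # Optimal variational distribution over inducing values u:
    #   Sigma = Kuu + Kuf Kfu / sigma2,  mu_u = Kuu Sigma^{-1} Kuf y / sigma2
    Sigma = Kuu + Kuf @ Kuf.T / sigma2
    mu_u = Kuu @ np.linalg.solve(Sigma, Kuf @ y) / sigma2
    # Predictive mean: K(Xs, Z) Kuu^{-1} mu_u
    return rbf(Xs, Z) @ np.linalg.solve(Kuu, mu_u)

# Example: recover a sine from 200 noisy points with only 15 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(0, 1, 15)[:, None]
Xs = np.linspace(0.1, 0.9, 5)[:, None]
mean = titsias_predict(X, y, Z, Xs)
```

The point of the construction is that all linear solves involve the m x m matrices `Kuu` and `Sigma` rather than the full n x n kernel matrix, reducing the cost from O(n^3) to O(n m^2).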
Related papers
- Weak neural variational inference for solving Bayesian inverse problems without forward models: applications in elastography [1.6385815610837167]
We introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs).
The Weak Neural Variational Inference (WNVI) method complements real measurements with virtual observations derived from the physical model.
We demonstrate that WNVI is as accurate as, and more efficient than, traditional methods that rely on repeatedly solving the (non-linear) forward problem as a black box.
arXiv Detail & Related papers (2024-07-30T09:46:03Z) - Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling a single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Monte Carlo guided Diffusion for Bayesian linear inverse problems [16.45956951465261]
We exploit the particular structure of the prior to define a sequence of intermediate linear inverse problems.
As the noise level decreases, the posteriors of these inverse problems get closer to the target posterior of the original inverse problem.
The proposed algorithm, MCGDiff, is shown to be theoretically grounded and we provide numerical simulations showing that it outperforms competing baselines.
arXiv Detail & Related papers (2023-08-15T18:32:00Z) - Inverse Models for Estimating the Initial Condition of Spatio-Temporal Advection-Diffusion Processes [5.814371485767541]
Inverse problems involve making inference about unknown parameters of a physical process using observational data.
This paper investigates the estimation of the initial condition of a spatio-temporal advection-diffusion process using spatially sparse data streams.
arXiv Detail & Related papers (2023-02-08T15:30:16Z) - Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z) - Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z) - Natural Gradient Variational Inference with Gaussian Mixture Models [1.7948767405202701]
Variational Inference (VI) methods approximate the posterior with a distribution usually chosen from a simple family using optimization.
The main contribution of this work is a set of update rules for natural gradient variational inference with mixtures of Gaussians.
arXiv Detail & Related papers (2021-11-15T20:04:32Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
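The pathwise view of conditioning described above rests on Matheron's rule: a posterior sample is a prior sample plus a data-dependent correction. A minimal NumPy sketch of the exact (non-approximate) version follows; note that this exact form still draws the joint prior via a Cholesky factor and thus scales cubically, whereas the paper's efficiency gains come from replacing that prior draw with cheaper approximations (e.g. random features), which this sketch omits. Kernel and noise settings are assumptions for the example.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel matrix between row-vector inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def pathwise_posterior_sample(X, y, Xs, sigma2=0.01, rng=None):
    """One GP posterior sample at Xs via Matheron's rule:
       f_post(x) = f_prior(x) + K(x,X) (K(X,X) + sigma2 I)^{-1} (y - f_prior(X) - eps)
    """
    rng = rng if rng is not None else np.random.default_rng()
    Xall = np.vstack([X, Xs])
    # Joint prior draw over training and test locations
    K = rbf(Xall, Xall) + 1e-8 * np.eye(len(Xall))
    f = np.linalg.cholesky(K) @ rng.standard_normal(len(Xall))
    fX, fXs = f[: len(X)], f[len(X):]
    eps = np.sqrt(sigma2) * rng.standard_normal(len(X))  # simulated noise
    Kxx = rbf(X, X) + sigma2 * np.eye(len(X))
    update = rbf(Xs, X) @ np.linalg.solve(Kxx, y - fX - eps)
    return fXs + update

# Averaging many pathwise samples recovers the usual posterior mean.
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
Xs = np.linspace(0.05, 0.95, 10)[:, None]
samples = np.stack([pathwise_posterior_sample(X, y, Xs, rng=rng) for _ in range(200)])
```

The attraction of this formulation is that the correction term is a deterministic function of the prior path, so any cheap approximate prior sampler immediately yields cheap posterior samples.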
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns.
arXiv Detail & Related papers (2020-03-18T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.