SNIPS: Solving Noisy Inverse Problems Stochastically
- URL: http://arxiv.org/abs/2105.14951v1
- Date: Mon, 31 May 2021 13:33:21 GMT
- Title: SNIPS: Solving Noisy Inverse Problems Stochastically
- Authors: Bahjat Kawar, Gregory Vaksman, Michael Elad
- Abstract summary: We introduce a novel algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem.
Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian denoiser.
We show that the samples produced are sharp, detailed and consistent with the given measurements, and their diversity exposes the inherent uncertainty in the inverse problem being solved.
- Score: 25.567566997688044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we introduce a novel stochastic algorithm dubbed SNIPS, which
draws samples from the posterior distribution of any linear inverse problem,
where the observation is assumed to be contaminated by additive white Gaussian
noise. Our solution incorporates ideas from Langevin dynamics and Newton's
method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian
denoiser. The proposed approach relies on an intricate derivation of the
posterior score function that includes a singular value decomposition (SVD) of
the degradation operator, in order to obtain a tractable iterative algorithm
for the desired sampling. Due to its stochasticity, the algorithm can produce
multiple high perceptual quality samples for the same noisy observation. We
demonstrate the abilities of the proposed paradigm for image deblurring,
super-resolution, and compressive sensing. We show that the samples produced
are sharp, detailed and consistent with the given measurements, and their
diversity exposes the inherent uncertainty in the inverse problem being solved.
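To make these ingredients concrete, the following is a minimal, illustrative sketch of annealed Langevin posterior sampling for y = Hx + n that combines an SVD of the degradation operator with a score estimate obtained from a pre-trained MMSE denoiser (via Tweedie's formula). It is not the exact SNIPS iteration derived in the paper: the `denoise(x, sigma)` callable is a stand-in for a pre-trained denoiser, and the (sigma0^2 + sigma^2) weighting of the data-fidelity term is a heuristic assumption.

```python
import numpy as np

def posterior_langevin_sketch(y, H, sigma0, denoise, sigmas,
                              steps_per_level=20, step_scale=0.3, rng=None):
    """Illustrative annealed-Langevin posterior sampler for y = H x + n,
    n ~ N(0, sigma0^2 I).  Not the exact SNIPS update; it only conveys the
    main ingredients: an SVD of H, a denoiser-based score, and annealed noise levels.
    `denoise(x, sigma)` stands in for a pre-trained MMSE Gaussian denoiser."""
    rng = np.random.default_rng() if rng is None else rng
    U, s, Vt = np.linalg.svd(H, full_matrices=False)    # SVD of the degradation operator
    x = rng.standard_normal(H.shape[1])                 # random initialization

    for sigma in sigmas:                                # anneal from large to small noise
        alpha = step_scale * (sigma / sigmas[-1]) ** 2  # step size shrinks with the noise level
        for _ in range(steps_per_level):
            # Prior score via Tweedie's formula: grad log p_sigma(x) ~ (D(x) - x) / sigma^2
            prior_score = (denoise(x, sigma) - x) / sigma ** 2
            # Data-fidelity gradient for Gaussian measurement noise, written via the SVD of H;
            # the (sigma0^2 + sigma^2) weighting is a heuristic, not the paper's derivation.
            residual = y - U @ (s * (Vt @ x))
            lik_score = Vt.T @ (s * (U.T @ residual)) / (sigma0 ** 2 + sigma ** 2)
            x = x + alpha * (prior_score + lik_score) \
                  + np.sqrt(2.0 * alpha) * rng.standard_normal(x.shape)
    return x
```

Run several times from different random initializations, a sampler of this form produces multiple posterior samples for the same measurement y, which is the behavior the abstract describes.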
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score based MCMC method that is deterministic, resulting in a deterministic evolution for particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z)
- Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z)
- Learning Rate Free Sampling in Constrained Domains [21.853333421463603]
We introduce a suite of new particle-based algorithms for sampling in constrained domains which are entirely learning rate free.
We demonstrate the performance of our algorithms on a range of numerical examples, including sampling from targets on the simplex.
arXiv Detail & Related papers (2023-05-24T09:31:18Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify, in particular, the existence of a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via an approximation of posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
- Stochastic Image Denoising by Sampling from the Posterior Distribution [25.567566997688044]
We propose a novel denoising approach that produces viable and high quality results, while maintaining a small MSE.
Our method employs Langevin dynamics, relying on repeated application of any given MMSE denoiser to obtain the reconstructed image by effectively sampling from the posterior distribution (the denoiser-to-score identity this builds on is sketched after this list).
Due to its stochasticity, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all shown to be legitimate denoising results.
arXiv Detail & Related papers (2021-01-23T18:28:19Z)
- Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser [7.7288480250888]
We develop a robust and general methodology for making use of implicit priors in deep neural networks.
A CNN trained to perform blind (i.e., with unknown noise level) least-squares denoising is presented.
A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem.
arXiv Detail & Related papers (2020-07-27T15:40:46Z)
- Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring [91.22679787578438]
We present a novel Thompson-sampling-based algorithm for partial monitoring.
We prove that the new algorithm achieves the logarithmic problem-dependent expected pseudo-regret $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability.
arXiv Detail & Related papers (2020-06-17T05:48:33Z)
- Sparse recovery by reduced variance stochastic approximation [5.672132510411465]
We discuss the application of iterative quadratic optimization routines to the problem of sparse signal recovery from noisy observations.
We show how one can straightforwardly enhance the reliability of the corresponding solution by using Median-of-Means-like techniques.
arXiv Detail & Related papers (2020-06-11T12:31:20Z)
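Several of the denoiser-driven entries above (the stochastic image denoising and implicit-prior papers in particular, as noted there) rest on the Tweedie/Miyasawa identity: an MMSE Gaussian denoiser encodes the score of the noise-smoothed prior. Below is a small self-contained check of that identity in the scalar Gaussian case, where the MMSE denoiser has a closed form; the prior parameters are arbitrary illustrative values.

```python
import numpy as np

# Tweedie / Miyasawa identity: for z = x + n with n ~ N(0, sigma^2),
# the MMSE denoiser D(z) = E[x | z] satisfies
#   (D(z) - z) / sigma^2 = d/dz log p_sigma(z),
# i.e. a denoiser gives the score of the noise-smoothed prior.
# Checked in closed form for a Gaussian prior x ~ N(mu, tau^2),
# purely as an illustration of the identity the listed methods rely on.

mu, tau, sigma = 0.5, 2.0, 0.7                                   # arbitrary illustrative values
z = np.linspace(-5.0, 5.0, 101)

mmse_denoiser = mu + (tau**2 / (tau**2 + sigma**2)) * (z - mu)   # closed-form E[x | z]
score_from_denoiser = (mmse_denoiser - z) / sigma**2
score_exact = -(z - mu) / (tau**2 + sigma**2)                    # d/dz log N(z; mu, tau^2 + sigma^2)

assert np.allclose(score_from_denoiser, score_exact)
print("max abs difference:", np.abs(score_from_denoiser - score_exact).max())
```

The same relation, with a learned denoiser in place of the closed-form one, is what turns a denoiser into a plug-in prior for Langevin-type sampling.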
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.