Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser
- URL: http://arxiv.org/abs/2007.13640v3
- Date: Fri, 7 May 2021 02:34:03 GMT
- Title: Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser
- Authors: Zahra Kadkhodaie and Eero P. Simoncelli
- Abstract summary: We develop a robust and general methodology for making use of implicit priors in deep neural networks.
High-probability samples are drawn from the prior embedded within a CNN trained to perform blind (i.e., with unknown noise level) least-squares denoising.
A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem.
- Score: 7.7288480250888
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior probability models are a fundamental component of many image processing
problems, but density estimation is notoriously difficult for high-dimensional
signals such as photographic images. Deep neural networks have provided
state-of-the-art solutions for problems such as denoising, which implicitly
rely on a prior probability model of natural images. Here, we develop a robust
and general methodology for making use of this implicit prior. We rely on a
statistical result due to Miyasawa (1961), who showed that the least-squares
solution for removing additive Gaussian noise can be written directly in terms
of the gradient of the log of the noisy signal density. We use this fact to
develop a stochastic coarse-to-fine gradient ascent procedure for drawing
high-probability samples from the implicit prior embedded within a CNN trained
to perform blind (i.e., with unknown noise level) least-squares denoising. A
generalization of this algorithm to constrained sampling provides a method for
using the implicit prior to solve any linear inverse problem, with no
additional training. We demonstrate this general form of transfer learning in
multiple applications, using the same algorithm to produce state-of-the-art
levels of unsupervised performance for deblurring, super-resolution,
inpainting, and compressive sensing.
Related papers
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z) - Score Priors Guided Deep Variational Inference for Unsupervised
Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and variational noise posterior to model the real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z) - Solving Linear Inverse Problems Provably via Posterior Sampling with
Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z) - A Variational Perspective on Solving Inverse Problems with Diffusion
Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Learning Gradually Non-convex Image Priors Using Score Matching [16.10747769038211]
We propose a unified framework for learning denoising-based image priors with sufficiently large models in the context of graduated non-convexity.
The learned priors can be incorporated into existing algorithms for solving inverse problems.
arXiv Detail & Related papers (2023-02-21T08:02:03Z) - Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, ours is a more generic design that includes such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z) - Poisson2Sparse: Self-Supervised Poisson Denoising From a Single Image [34.27748767631027]
We present a novel self-supervised learning method for single-image denoising.
We approximate traditional iterative optimization algorithms for image denoising with a recurrent neural network.
Our method outperforms the state-of-the-art approaches in terms of PSNR and SSIM.
arXiv Detail & Related papers (2022-06-04T00:08:58Z) - SNIPS: Solving Noisy Inverse Problems Stochastically [25.567566997688044]
We introduce a novel algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem.
Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) denoiser.
We show that the samples produced are sharp, detailed and consistent with the given measurements, and their diversity exposes the inherent uncertainty in the inverse problem being solved.
arXiv Detail & Related papers (2021-05-31T13:33:21Z) - Stochastic Image Denoising by Sampling from the Posterior Distribution [25.567566997688044]
We propose a novel denoising approach that produces viable, high-quality results while maintaining a small MSE.
Our method employs Langevin dynamics, relying on repeated application of any given MMSE denoiser and obtaining the reconstructed image by effectively sampling from the posterior distribution.
Because it samples from the posterior, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all of them legitimate denoising results.
arXiv Detail & Related papers (2021-01-23T18:28:19Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)