DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based
Minimization
- URL: http://arxiv.org/abs/2307.04946v1
- Date: Tue, 11 Jul 2023 00:21:38 GMT
- Title: DDGM: Solving inverse problems by Diffusive Denoising of Gradient-based
Minimization
- Authors: Kyle Luther, H. Sebastian Seung
- Abstract summary: A recent trend is to train a convolutional net to denoise images, and use this net as a prior when solving the inverse problem.
Here we propose a simpler approach that combines the traditional gradient-based minimization of reconstruction error with denoising.
We show that high accuracy can be achieved with as few as 50 denoising steps.
- Score: 4.209801809583906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse problems generally require a regularizer or prior for a good
solution. A recent trend is to train a convolutional net to denoise images, and
use this net as a prior when solving the inverse problem. Several proposals
depend on a singular value decomposition of the forward operator, and several
others backpropagate through the denoising net at runtime. Here we propose a
simpler approach that combines the traditional gradient-based minimization of
reconstruction error with denoising. Noise is also added at each step, so the
iterative dynamics resembles a Langevin or diffusion process. Both the level of
added noise and the size of the denoising step decay exponentially with time.
We apply our method to the problem of tomographic reconstruction from electron
micrographs acquired at multiple tilt angles. With empirical studies using
simulated tilt views, we find parameter settings for our method that produce
good results. We show that high accuracy can be achieved with as few as 50
denoising steps. We also compare with DDRM and DPS, more complex diffusion
methods of the kinds mentioned above. These methods are less accurate (as
measured by MSE and SSIM) for our tomography problem, even after the generation
hyperparameters are optimized. Finally we extend our method to reconstruction
of arbitrary-sized images and show results on 128 $\times$ 1568 pixel images.
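The abstract describes the iteration only at a high level; the sketch below is one plausible reading of it in Python, not the authors' implementation. The forward operator A, the denoiser interface denoise(x, sigma), the step size eta, and the exponential schedules for sigma and lam are illustrative assumptions introduced here for clarity.

import numpy as np

def ddgm_reconstruct(A, y, denoise, x0, n_steps=50, eta=1e-3,
                     sigma_start=1.0, sigma_end=1e-2,
                     lam_start=1.0, lam_end=1e-2, seed=0):
    """Sketch of the abstract's iteration (assumed interface, not the authors' code).

    A       : forward operator as a 2-D array mapping a flattened image to measurements
    y       : observed measurements (e.g., projections from tilted views)
    denoise : placeholder callable denoise(x, sigma) -> denoised image
    x0      : initial flattened image estimate
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    t = np.arange(n_steps) / (n_steps - 1)
    # both the added-noise level and the denoising step size decay exponentially with time
    sigmas = sigma_start * (sigma_end / sigma_start) ** t
    lams = lam_start * (lam_end / lam_start) ** t
    for sigma, lam in zip(sigmas, lams):
        # gradient step on the reconstruction error ||A x - y||^2
        x = x - eta * (A.T @ (A @ x - y))
        # denoising step: move a fraction lam of the way toward the denoiser output
        x = (1.0 - lam) * x + lam * denoise(x, sigma)
        # inject fresh noise so the dynamics resemble a Langevin / diffusion process
        x = x + sigma * rng.standard_normal(x.shape)
    return x

With the default of 50 iterations, a loop of this form calls the denoiser 50 times, consistent with the abstract's claim that as few as 50 denoising steps suffice.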
Related papers
- Learning of Patch-Based Smooth-Plus-Sparse Models for Image Reconstruction [14.93489065234423]
We formulate the optimization as a bilevel problem.
We evaluate our method for denoising, super-resolution, and compressed-sensing magnetic-resonance imaging.
arXiv Detail & Related papers (2024-12-17T16:34:32Z)
- How to Best Combine Demosaicing and Denoising? [16.921538543268216]
Demosaicing and denoising play a critical role in the raw imaging pipeline.
Most demosaicing methods address the demosaicing of noise-free images.
The real problem is to jointly denoise and demosaic noisy raw images.
arXiv Detail & Related papers (2024-08-13T07:23:53Z)
- Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing [84.97865583302244]
We propose a new method called Decoupled Annealing Posterior Sampling (DAPS).
DAPS relies on a novel noise annealing process.
We demonstrate that DAPS significantly improves sample quality and stability across multiple image restoration tasks.
arXiv Detail & Related papers (2024-07-01T17:59:23Z)
- Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and a variational noise posterior to model real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z)
- Simultaneous Image-to-Zero and Zero-to-Noise: Diffusion Models with Analytical Image Attenuation [53.04220377034574]
We propose incorporating an analytical image attenuation process into the forward diffusion process for high-quality (un)conditioned image generation.
Our method represents the forward image-to-noise mapping as a simultaneous image-to-zero mapping and zero-to-noise mapping.
We have conducted experiments on unconditioned image generation, e.g., CIFAR-10 and CelebA-HQ-256, and image-conditioned downstream tasks such as super-resolution, saliency detection, edge detection, and image inpainting.
arXiv Detail & Related papers (2023-06-23T18:08:00Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- On Measuring and Controlling the Spectral Bias of the Deep Image Prior [63.88575598930554]
The deep image prior has demonstrated that untrained networks can address inverse imaging problems.
It requires an oracle to determine when to stop the optimization as the performance degrades after reaching a peak.
We study the deep image prior from a spectral bias perspective to address these problems.
arXiv Detail & Related papers (2021-07-02T15:10:42Z)
- Unsupervised Single Image Super-resolution Under Complex Noise [60.566471567837574]
This paper proposes a model-based unsupervised SISR method to deal with the general SISR task with unknown degradations.
The proposed method surpasses the current state-of-the-art (SotA) method by about 1 dB PSNR, while using a smaller model (0.34M vs. 2.40M parameters) and running faster.
arXiv Detail & Related papers (2021-07-02T11:55:40Z)
- Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on convex-independent deep ADMM.
Parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z)
- Solving Inverse Problems with Hybrid Deep Image Priors: the challenge of preventing overfitting [1.52292571922932]
We analyze and solve the overfitting problem of the deep image prior (DIP).
Due to the large number of parameters of the neural network and noisy data, DIP overfits to the noise in the image as the number of iterations grows.
In the thesis, we use hybrid deep image priors to avoid overfitting.
arXiv Detail & Related papers (2020-11-03T14:50:53Z)
- Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser [7.7288480250888]
We develop a robust and general methodology for making use of implicit priors in deep neural networks.
A CNN trained to perform blind (i.e., with unknown noise level) least-squares denoising is presented.
A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem.
arXiv Detail & Related papers (2020-07-27T15:40:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.