Image Restoration by Denoising Diffusion Models with Iteratively Preconditioned Guidance
- URL: http://arxiv.org/abs/2312.16519v2
- Date: Sun, 14 Apr 2024 17:56:49 GMT
- Title: Image Restoration by Denoising Diffusion Models with Iteratively Preconditioned Guidance
- Authors: Tomer Garber, Tom Tirer
- Abstract summary: Training deep neural networks has become a common approach for addressing image restoration problems.
In low-noise settings, guidance that is based on back-projection (BP) has been shown to be a promising strategy.
We propose a novel guidance technique, based on preconditioning, that allows traversing from BP-based guidance to least squares based guidance.
- Score: 9.975341265604577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep neural networks has become a common approach for addressing image restoration problems. An alternative for training a "task-specific" network for each observation model is to use pretrained deep denoisers for imposing only the signal's prior within iterative algorithms, without additional training. Recently, a sampling-based variant of this approach has become popular with the rise of diffusion/score-based generative models. Using denoisers for general purpose restoration requires guiding the iterations to ensure agreement of the signal with the observations. In low-noise settings, guidance that is based on back-projection (BP) has been shown to be a promising strategy (used recently also under the names "pseudoinverse" or "range/null-space" guidance). However, the presence of noise in the observations hinders the gains from this approach. In this paper, we propose a novel guidance technique, based on preconditioning that allows traversing from BP-based guidance to least squares based guidance along the restoration scheme. The proposed approach is robust to noise while still having much simpler implementation than alternative methods (e.g., it does not require SVD or a large number of iterations). We use it within both an optimization scheme and a sampling-based scheme, and demonstrate its advantages over existing methods for image deblurring and super-resolution.
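The traversal described in the abstract can be illustrated with a small numerical sketch. The snippet below is a minimal illustration, not the paper's exact algorithm: it assumes a known linear observation operator `A` and uses the regularized direction A^T (A A^T + delta I)^{-1} (y - A x), whose delta -> 0 limit is the back-projection (pseudoinverse) guidance and whose large-delta limit is proportional to the least-squares guidance A^T (y - A x). The delta schedule, step size, and toy operator are illustrative assumptions; in the actual method such a guidance step is interleaved with a pretrained denoiser or diffusion sampling step.

```python
# Minimal sketch (illustrative, not the paper's exact scheme): a guidance
# direction that traverses from back-projection (BP) guidance A^+(y - A x)
# to least-squares (LS) guidance A^T(y - A x) via a preconditioning
# parameter delta. The operator A, the delta schedule, and the step size
# below are assumptions made for this demo only.
import numpy as np

def preconditioned_guidance(A, x, y, delta):
    """Return A^T (A A^T + delta * I)^{-1} (y - A x).

    delta -> 0 recovers BP (pseudoinverse) guidance; a large delta gives a
    direction proportional to the LS guidance A^T (y - A x).
    """
    residual = y - A @ x
    gram = A @ A.T + delta * np.eye(A.shape[0])
    return A.T @ np.linalg.solve(gram, residual)

# Toy usage with a random "degradation" operator and noisy observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))                 # illustrative linear operator
x_true = rng.standard_normal(64)
y = A @ x_true + 0.01 * rng.standard_normal(32)   # y = A x + noise

x = np.zeros(64)
num_iters = 50
for t in range(num_iters):
    # Hypothetical schedule moving the guidance from BP-like to LS-like.
    delta = 1e-4 + 10.0 * t / (num_iters - 1)
    x = x + 0.5 * preconditioned_guidance(A, x, y, delta)
    # In the paper's optimization and sampling schemes, a pretrained denoiser
    # (or diffusion step) would be applied between such guidance updates.
```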
Related papers
- Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors [21.0128625037708]
We present an innovative framework, divide-and-conquer posterior sampling.
It reduces the approximation error associated with current techniques without the need for retraining.
We demonstrate the versatility and effectiveness of our approach for a wide range of Bayesian inverse problems.
arXiv Detail & Related papers (2024-03-18T01:47:24Z) - Plug-and-Play image restoration with Stochastic deNOising REgularization [8.678250057211368]
We propose a new framework called Stochastic deNOising REgularization (SNORE).
SNORE applies the denoiser only to images with noise of the appropriate level.
It is based on an explicit regularization, which leads to a stochastic gradient descent scheme for solving inverse problems.
arXiv Detail & Related papers (2024-02-01T18:05:47Z) - Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z) - Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z) - Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising [14.486289176696438]
We propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising.
We exploit a non-i.i.d. Gaussian mixture model and a variational noise posterior to model real-world noise.
Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods.
arXiv Detail & Related papers (2023-08-09T03:26:58Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
arXiv Detail & Related papers (2022-09-26T14:11:05Z) - Learning Sparsity-Promoting Regularizers using Bilevel Optimization [9.18465987536469]
We present a method for supervised learning of sparsity-promoting regularizers for denoising signals and images.
Experiments with structured 1D signals and natural images show that the proposed method can learn an operator that outperforms well-known regularizers.
arXiv Detail & Related papers (2022-07-18T20:50:02Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.