Deblurring via Stochastic Refinement
- URL: http://arxiv.org/abs/2112.02475v1
- Date: Sun, 5 Dec 2021 04:36:09 GMT
- Title: Deblurring via Stochastic Refinement
- Authors: Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia,
Alexandros G. Dimakis, Peyman Milanfar
- Abstract summary: We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
- Score: 85.42730934561101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image deblurring is an ill-posed problem with multiple plausible solutions
for a given input image. However, most existing methods produce a deterministic
estimate of the clean image and are trained to minimize pixel-level distortion.
These metrics are known to be poorly correlated with human perception, and
often lead to unrealistic reconstructions. We present an alternative framework
for blind deblurring based on conditional diffusion models. Unlike existing
techniques, we train a stochastic sampler that refines the output of a
deterministic predictor and is capable of producing a diverse set of plausible
reconstructions for a given input. This leads to a significant improvement in
perceptual quality over existing state-of-the-art methods across multiple
standard benchmarks. Our predict-and-refine approach also enables much more
efficient sampling compared to typical diffusion models. Combined with a
carefully tuned network architecture and inference procedure, our method is
competitive in terms of distortion metrics such as PSNR. These results show
clear benefits of our diffusion-based method for deblurring and challenge the
widely used strategy of producing a single, deterministic reconstruction.
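The predict-and-refine idea described in the abstract can be sketched schematically: a deterministic predictor produces an initial estimate, and a stochastic sampler perturbs it and anneals the noise away to yield diverse plausible reconstructions. The toy NumPy sketch below is illustrative only; `predictor` and `refine` are hypothetical stand-ins, not the paper's trained networks or its actual conditional diffusion sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictor(y):
    """Stand-in for a deterministic deblurring network's point estimate
    (here simply the identity, for illustration)."""
    return y

def refine(x_init, noise_scale=0.5, steps=10):
    """Toy stochastic refinement: start from the predictor output plus
    noise, then repeatedly pull the iterate back toward that output
    while injecting progressively less noise -- a crude stand-in for
    the short reverse-diffusion chain of a predict-and-refine sampler."""
    x = x_init + noise_scale * rng.standard_normal(x_init.shape)
    for t in range(steps, 0, -1):
        sigma = noise_scale * t / steps  # annealed noise level
        x = x + 0.5 * (x_init - x) + 0.1 * sigma * rng.standard_normal(x.shape)
    return x

y = np.ones(8)                            # stand-in for a blurry input
x0 = predictor(y)                         # deterministic initial estimate
samples = [refine(x0) for _ in range(4)]  # distinct plausible outputs
```

Because each call to `refine` draws fresh noise, the samples differ from one another while all staying close to the predictor's estimate, mirroring the "diverse set of plausible reconstructions" claimed in the abstract.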
Related papers
- Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM)
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z)
- Fast Diffusion EM: a diffusion model for blind inverse problems with application to deconvolution [0.0]
Current methods assume the degradation to be known and provide impressive results in terms of restoration and diversity.
In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the kernel model.
Our method alternates between approximating the expected log-likelihood of the problem using samples drawn from a diffusion model and a step to estimate unknown model parameters.
arXiv Detail & Related papers (2023-09-01T06:47:13Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z)
- Uncertainty-aware Generalized Adaptive CycleGAN [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping in an unsupervised manner.
Existing methods often learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method called Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC)
arXiv Detail & Related papers (2021-02-23T15:22:35Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-arts.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.