Inversion by Direct Iteration: An Alternative to Denoising Diffusion for
Image Restoration
- URL: http://arxiv.org/abs/2303.11435v5
- Date: Fri, 2 Feb 2024 18:52:51 GMT
- Title: Inversion by Direct Iteration: An Alternative to Denoising Diffusion for
Image Restoration
- Authors: Mauricio Delbracio and Peyman Milanfar
- Abstract summary: Inversion by Direct Iteration (InDI) is a new formulation for supervised image restoration.
It produces more realistic and detailed images than existing regression-based methods.
- Score: 22.709205282657617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inversion by Direct Iteration (InDI) is a new formulation for supervised
image restoration that avoids the so-called "regression to the mean" effect and
produces more realistic and detailed images than existing regression-based
methods. It does this by gradually improving image quality in small steps,
similar to generative denoising diffusion models. Image restoration is an
ill-posed problem where multiple high-quality images are plausible
reconstructions of a given low-quality input. The outcome of a single-step
regression model is therefore typically an aggregate of all plausible
explanations, and thus lacks detail and realism. The main advantage of InDI
is that it does not try to predict the clean target image in a single step but
instead gradually improves the image in small steps, resulting in better
perceptual quality. While generative denoising diffusion models also work in
small steps, our formulation is distinct in that it does not require knowledge
of any analytic form of the degradation process. Instead, we directly learn an
iterative restoration process from low-quality and high-quality paired
examples. InDI can be applied to virtually any image degradation, given paired
training data. In conditional denoising diffusion image restoration the
denoising network generates the restored image by repeatedly denoising an
initial image of pure noise, conditioned on the degraded input. Contrary to
conditional denoising formulations, InDI directly proceeds by iteratively
restoring the input low-quality image, producing high-quality results on a
variety of image restoration tasks, including motion and out-of-focus
deblurring, super-resolution, compression artifact removal, and denoising.
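The small-step restoration described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it assumes InDI's iterative update x_{t-d} = (d/t) * f(x_t, t) + (1 - d/t) * x_t, where f is a learned restorer that predicts the clean image from an intermediate iterate. The toy stand-in model below is hypothetical and operates on a scalar for simplicity; a real f would be a trained neural network acting on images.

```python
def indi_restore(y, f, num_steps=10):
    """Iteratively restore a degraded input y in small steps.

    Starts from x_1 = y and repeatedly blends the current iterate with
    the model's clean-image prediction:
        x_{t-d} = (d/t) * f(x_t, t) + (1 - d/t) * x_t
    At the final step (t == d) the blend weight d/t is 1, so the output
    is the model's prediction alone.
    """
    x = y
    t = 1.0
    d = 1.0 / num_steps
    for _ in range(num_steps):
        x_hat = f(x, t)                        # estimate of the clean image
        x = (d / t) * x_hat + (1.0 - d / t) * x  # small step toward the estimate
        t -= d
    return x

# Hypothetical toy "model": pretends the clean signal is 0.0 and moves
# the iterate partway there. A real restorer would be trained on paired
# low-quality/high-quality examples, as the paper describes.
toy_f = lambda x, t: 0.7 * x

restored = indi_restore(5.0, toy_f, num_steps=10)
```

Because each step only moves the iterate a fraction d/t toward the current clean-image estimate, the trajectory improves gradually rather than committing to a single one-shot regression output.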
Related papers
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise [34.65659277870287]
Research on denoising diffusion models has expanded its application to the field of image restoration.
We propose Resfusion, a framework that incorporates the residual term into the diffusion forward process.
We show that Resfusion exhibits competitive performance on ISTD dataset, LOL dataset and Raindrop dataset with only five sampling steps.
arXiv Detail & Related papers (2023-11-25T02:09:38Z)
- Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
arXiv Detail & Related papers (2023-09-19T16:01:20Z)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Stimulating the Diffusion Model for Image Denoising via Adaptive Embedding and Ensembling [56.506240377714754]
We present a novel strategy called the Diffusion Model for Image Denoising (DMID).
Our strategy includes an adaptive embedding method that embeds the noisy image into a pre-trained unconditional diffusion model.
Our DMID strategy achieves state-of-the-art performance on both distortion-based and perception-based metrics.
arXiv Detail & Related papers (2023-07-08T14:59:41Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Multiscale Structure Guided Diffusion for Image Deblurring [24.09642909404091]
Diffusion Probabilistic Models (DPMs) have been employed for image deblurring.
We introduce a simple yet effective multiscale structure guidance as an implicit bias.
We demonstrate more robust deblurring results with fewer artifacts on unseen data.
arXiv Detail & Related papers (2022-12-04T10:40:35Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Dynamic Dual-Output Diffusion Models [100.32273175423146]
Iterative denoising-based generation has been shown to be comparable in quality to other classes of generative models.
A major drawback of this method is that it requires hundreds of iterations to produce a competitive result.
Recent works have proposed solutions that allow for faster generation with fewer iterations, but the image quality gradually deteriorates.
arXiv Detail & Related papers (2022-03-08T11:20:40Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.