Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising
- URL: http://arxiv.org/abs/2309.10714v1
- Date: Tue, 19 Sep 2023 16:01:20 GMT
- Title: Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising
- Authors: Yujin Wang, Lingen Li, Tianfan Xue, Jinwei Gu
- Abstract summary: We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It then employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
- Score: 16.43285056788183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image denoising is a fundamental and challenging task in the field of
computer vision. Most supervised denoising methods learn to reconstruct clean
images from noisy inputs, which have intrinsic spectral bias and tend to
produce over-smoothed and blurry images. Recently, researchers have explored
diffusion models to generate high-frequency details in image restoration tasks,
but these models do not guarantee that the generated texture aligns with real
images, leading to undesirable artifacts. To address the trade-off between
visual appeal and fidelity of high-frequency details in denoising tasks, we
propose a novel approach called the Reconstruct-and-Generate Diffusion Model
(RnG). Our method leverages a reconstructive denoising network to recover the
majority of the underlying clean signal, which serves as the initial estimation
for subsequent steps to maintain fidelity. Additionally, it employs a diffusion
algorithm to generate residual high-frequency details, thereby enhancing visual
quality. We further introduce a two-stage training scheme to ensure effective
collaboration between the reconstructive and generative modules of RnG. To
reduce undesirable texture introduced by the diffusion model, we also propose
an adaptive step controller that regulates the number of inverse steps applied
by the diffusion model, allowing control over the level of high-frequency
details added to each patch as well as saving the inference computational cost.
Through our proposed RnG, we achieve a better balance between perception and
distortion. We conducted extensive experiments on both synthetic and real
denoising datasets, validating the superiority of the proposed approach.
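The pipeline described in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the paper's released implementation: `reconstruct`, `estimate_steps`, and `diffusion_residual` are hypothetical stand-ins (a box blur for the reconstructive network, local variance as the step controller's texture proxy, and random perturbations in place of real diffusion inverse steps).

```python
import numpy as np

def reconstruct(noisy):
    # Stage 1 (placeholder): a reconstructive denoiser recovers the
    # low-frequency clean signal; here, a simple 3x3 box blur.
    k = 3
    pad = np.pad(noisy, k // 2, mode="edge")
    out = np.zeros_like(noisy)
    h, w = noisy.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def estimate_steps(patch, max_steps=10):
    # Adaptive step controller (proxy): flat patches get few inverse
    # steps, textured patches get more, bounded by max_steps.
    return int(min(max_steps, 1 + patch.var() * max_steps))

def diffusion_residual(estimate, steps, rng):
    # Stage 2 (placeholder): each "inverse step" of the generative model
    # adds a small high-frequency refinement on top of the estimate.
    residual = np.zeros_like(estimate)
    for _ in range(steps):
        residual += 0.01 * rng.standard_normal(estimate.shape)
    return residual

def rng_denoise(noisy, rng):
    init = reconstruct(noisy)        # fidelity-preserving initial estimate
    steps = estimate_steps(init)     # per-patch inverse-step budget
    return init + diffusion_residual(init, steps, rng)

rng = np.random.default_rng(0)
noisy = rng.standard_normal((8, 8)) * 0.1 + 0.5
out = rng_denoise(noisy, rng)
print(out.shape)  # (8, 8)
```

The key design point the sketch mirrors is that the generative stage only adds a residual on top of a faithful reconstruction, so fidelity is anchored by stage 1 while the per-patch step budget caps how much texture the diffusion stage may invent.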
Related papers
- Gradient-Guided Conditional Diffusion Models for Private Image Reconstruction: Analyzing Adversarial Impacts of Differential Privacy and Denoising [21.30726250408398]
Current gradient-based reconstruction methods struggle with high-resolution images due to computational complexity and prior knowledge requirements.
We propose two novel methods that require minimal modifications to the diffusion model's generation process and eliminate the need for prior knowledge.
We conduct a comprehensive theoretical analysis of the impact of differential privacy noise on the quality of reconstructed images, revealing the relationship among noise magnitude, the architecture of attacked models, and the attacker's reconstruction capability.
arXiv Detail & Related papers (2024-11-05T12:39:21Z)
- Ultrasound Imaging based on the Variance of a Diffusion Restoration Model [7.360352432782388]
We propose a hybrid reconstruction method combining an ultrasound linear direct model with a learning-based prior coming from a generative Denoising Diffusion model.
We conduct experiments on synthetic, in-vitro, and in-vivo data, demonstrating the efficacy of our variance imaging approach in achieving high-quality image reconstructions.
arXiv Detail & Related papers (2024-03-22T16:10:38Z)
- Spatial-and-Frequency-aware Restoration method for Images based on Diffusion Models [7.947387272047602]
We propose SaFaRI, a spatial-and-frequency-aware diffusion model for Image Restoration (IR)
Our model encourages images to preserve data-fidelity in both the spatial and frequency domains, resulting in enhanced reconstruction quality.
Our thorough evaluation demonstrates that SaFaRI achieves state-of-the-art performance on both the ImageNet datasets and FFHQ datasets.
arXiv Detail & Related papers (2024-01-31T07:11:01Z)
- Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise [34.65659277870287]
Research on denoising diffusion models has expanded its application to the field of image restoration.
We propose Resfusion, a framework that incorporates the residual term into the diffusion forward process.
We show that Resfusion exhibits competitive performance on ISTD dataset, LOL dataset and Raindrop dataset with only five sampling steps.
arXiv Detail & Related papers (2023-11-25T02:09:38Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
However, DNNs still generate ghosting artifacts when the LDR images exhibit saturation and large motion.
We formulate the HDR deghosting problem as an image generation that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Diffusion Reconstruction of Ultrasound Images with Informative Uncertainty [5.375425938215277]
Enhancing ultrasound image quality involves balancing concurrent factors like contrast, resolution, and speckle preservation.
We propose a hybrid approach leveraging advances in diffusion models.
We conduct comprehensive experiments on simulated, in-vitro, and in-vivo data, demonstrating the efficacy of our approach.
arXiv Detail & Related papers (2023-10-31T16:51:40Z)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from slow inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- SAR Despeckling using a Denoising Diffusion Probabilistic Model [52.25981472415249]
The presence of speckle degrades the image quality and adversely affects the performance of SAR image understanding applications.
We introduce SAR-DDPM, a denoising diffusion probabilistic model for SAR despeckling.
The proposed method achieves significant improvements in both quantitative and qualitative results over the state-of-the-art despeckling methods.
arXiv Detail & Related papers (2022-06-09T14:00:26Z)
- Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation [52.75909685172843]
Real-world image noise removal is a long-standing yet very challenging task in computer vision.
We propose a novel unified framework to deal with the noise removal and noise generation tasks.
Our method learns the joint distribution of the clean-noisy image pairs.
arXiv Detail & Related papers (2020-07-12T09:16:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.