ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow
Removal
- URL: http://arxiv.org/abs/2212.04711v2
- Date: Tue, 13 Dec 2022 08:56:31 GMT
- Title: ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow
Removal
- Authors: Lanqing Guo, Chong Wang, Wenhan Yang, Siyu Huang, Yufei Wang,
Hanspeter Pfister, Bihan Wen
- Abstract summary: We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant PSNR improvement, from 31.69 dB to 34.73 dB, on the SRD dataset.
- Score: 74.86415440438051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep learning methods have achieved promising results in image shadow
removal. However, their restored images still suffer from unsatisfactory
boundary artifacts, due to the lack of a degradation prior embedding and
insufficient modeling capacity. Our work addresses these issues by proposing a
unified diffusion framework that integrates both the image and degradation
priors for highly effective shadow removal. In detail, we first propose a
shadow degradation model, which inspires us to build a novel unrolling
diffusion model, dubbed ShadowDiffusion. It markedly improves the model's
capacity for shadow removal by progressively refining the desired output with
both the degradation prior and the diffusive generative prior, and by nature it
can serve as a new strong baseline for image restoration. Furthermore,
ShadowDiffusion progressively refines the estimated shadow mask as an auxiliary
task of the diffusion generator, which leads to more accurate and robust
shadow-free image generation. We conduct extensive experiments on three popular
public datasets, including ISTD, ISTD+, and SRD, to validate our method's
effectiveness. Compared to state-of-the-art methods, our model achieves a
significant PSNR improvement on the SRD dataset, from 31.69 dB to 34.73 dB.
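As a rough illustration of the unrolled sampling described in the abstract, the sketch below shows what a single reverse-diffusion step could look like when a diffusive generative prior, a degradation-consistency correction, and joint shadow-mask refinement are combined. This is a minimal, hypothetical Python/PyTorch sketch: the denoiser interface, the multiplicative shadow model, and the correction weight lam are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of one unrolled reverse-diffusion step in the spirit of
# ShadowDiffusion. All names and the simple shadow model are assumptions.
import torch

def unrolled_reverse_step(x_t, shadow_img, mask_t, t, denoiser,
                          alphas_cumprod, lam=0.5):
    """One reverse step: generative prior + degradation prior + mask update."""
    a_bar = alphas_cumprod[t]

    # Diffusive generative prior: the network jointly predicts the noise and a
    # refined shadow mask (assumed interface, see lead-in above).
    eps_pred, mask_next = denoiser(x_t, shadow_img, mask_t, t)

    # Estimate of the clean (shadow-free) image from the noise prediction.
    x0_pred = (x_t - torch.sqrt(1.0 - a_bar) * eps_pred) / torch.sqrt(a_bar)

    # Degradation prior: re-darkening x0 inside the mask should reproduce the
    # observed shadow image (a simple multiplicative model is assumed here);
    # nudge x0 toward consistency with the observation.
    degraded = x0_pred * (1.0 - 0.5 * mask_next)
    x0_pred = x0_pred - lam * mask_next * (degraded - shadow_img)

    # DDIM-style deterministic update to the previous timestep
    # (stochastic noise term omitted for brevity).
    a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_bar)
    x_prev = torch.sqrt(a_bar_prev) * x0_pred \
             + torch.sqrt(1.0 - a_bar_prev) * eps_pred
    return x_prev, mask_next
```

In practice, the mask prediction would be trained as an auxiliary task alongside the denoiser, and such steps would be iterated from pure noise down to t = 0; the sketch only illustrates how the two priors and the mask refinement interact within one step.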
Related papers
- Frequency-Aware Guidance for Blind Image Restoration via Diffusion Models [20.898262207229873]
Blind image restoration remains a significant challenge in low-level vision tasks.
Guided diffusion models have achieved promising results in blind image restoration.
We propose a novel frequency-aware guidance loss that can be integrated into various diffusion models in a plug-and-play manner.
arXiv Detail & Related papers (2024-11-19T12:18:16Z)
- Generative Portrait Shadow Removal [27.98144439007323]
We introduce a high-fidelity portrait shadow removal model that can effectively enhance portrait images.
Our method also demonstrates robustness to diverse subjects captured in real environments.
arXiv Detail & Related papers (2024-10-07T22:09:22Z)
- Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field [10.225323718645022]
Aliasing artifacts in renderings produced by Neural Radiance Field (NeRF) are a long-standing but complex issue.
We present a Diffusion-based restoration method for anti-aliasing Neural Radiance Field (Drantal-NeRF).
arXiv Detail & Related papers (2024-07-10T08:32:13Z)
- Latent Feature-Guided Diffusion Models for Shadow Removal [50.02857194218859]
We propose the use of diffusion models as they offer a promising approach to gradually refine the details of shadow regions during the diffusion process.
Our method improves this process by conditioning on a learned latent feature space that inherits the characteristics of shadow-free images.
We demonstrate the effectiveness of our approach, which outperforms the previous best method by 13% in terms of RMSE on the AISTD dataset.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is an important task that remains unresolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, which cannot handle the shadow well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods for image restoration (IR).
We classify and emphasize the innovative designs that use diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for future research on diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from long inference times, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, thus resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.