Structure-Guided Diffusion Models for High-Fidelity Portrait Shadow Removal
- URL: http://arxiv.org/abs/2507.04692v2
- Date: Mon, 14 Jul 2025 16:47:15 GMT
- Title: Structure-Guided Diffusion Models for High-Fidelity Portrait Shadow Removal
- Authors: Wanchang Yu, Qing Zhang, Rongjia Zheng, Wei-Shi Zheng
- Abstract summary: We present a diffusion-based portrait shadow removal approach that can robustly produce high-fidelity results. We first train a shadow-independent structure extraction network on a real-world portrait dataset with various synthetic lighting conditions. The structure map is then used as a condition to train a structure-guided inpainting diffusion model that removes shadows in a generative manner.
- Score: 34.35752953614944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a diffusion-based portrait shadow removal approach that can robustly produce high-fidelity results. Unlike previous methods, we cast shadow removal as diffusion-based inpainting. To this end, we first train a shadow-independent structure extraction network on a real-world portrait dataset with various synthetic lighting conditions, which allows us to generate a shadow-independent structure map that preserves facial details while excluding unwanted shadow boundaries. The structure map is then used as a condition to train a structure-guided inpainting diffusion model that removes shadows in a generative manner. Finally, to restore the fine-scale details (e.g., eyelashes, moles, and spots) that may not be captured by the structure map, we take the gradients inside the shadow regions as guidance and train a detail restoration diffusion model to refine the shadow removal result. Extensive experiments on benchmark datasets show that our method clearly outperforms existing methods and effectively avoids previously common issues such as facial identity tampering, shadow residuals, color distortion, structure blurring, and loss of detail. Our code is available at https://github.com/wanchang-yu/Structure-Guided-Diffusion-for-Portrait-Shadow-Removal.
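The abstract outlines a three-stage pipeline: shadow-independent structure extraction, structure-guided inpainting diffusion, and gradient-guided detail restoration. The sketch below is a minimal, hypothetical PyTorch-style outline of how those stages could be chained at inference time; the module names, interfaces, and the toy denoising loop are assumptions for illustration and do not reproduce the authors' released code or a real diffusion sampler.

```python
# Minimal, hypothetical sketch of the three-stage pipeline described in the abstract:
# (1) shadow-independent structure extraction, (2) structure-guided inpainting
# diffusion, (3) gradient-guided detail restoration. Module names, interfaces, and
# the crude sampling loop are assumptions for illustration only.
import torch
import torch.nn as nn


class StructureExtractor(nn.Module):
    """Stand-in for the shadow-independent structure extraction network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # single-channel structure map
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)


class ConditionalDenoiser(nn.Module):
    """Toy denoiser used for both diffusion stages; conditions on extra channels."""
    def __init__(self, cond_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + cond_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x_t: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, cond], dim=1))


def remove_shadow(image, shadow_mask, structure_net, inpaint_net, detail_net, steps=10):
    """Run the sketched pipeline on a (B, 3, H, W) image with a (B, 1, H, W) mask."""
    # Stage 1: shadow-independent structure map (facial details, no shadow edges).
    structure = structure_net(image)

    # Stage 2: structure-guided inpainting diffusion inside the shadow region.
    # A real sampler would follow a proper noise schedule; here we just iterate.
    x = torch.randn_like(image)
    cond = torch.cat([structure, shadow_mask, image * (1 - shadow_mask)], dim=1)
    for _ in range(steps):
        x = x - 0.1 * inpaint_net(x, cond)               # crude denoising update
        x = image * (1 - shadow_mask) + x * shadow_mask  # keep lit pixels fixed
    coarse = x

    # Stage 3: detail restoration guided by image gradients inside the shadow region.
    dy = image[..., 1:, :] - image[..., :-1, :]          # vertical finite differences
    dx = image[..., :, 1:] - image[..., :, :-1]          # horizontal finite differences
    grad = torch.zeros_like(image)
    grad[..., 1:, :] += dy.abs()
    grad[..., :, 1:] += dx.abs()
    grad = grad * shadow_mask                            # only gradients under shadow
    refined = coarse + 0.1 * detail_net(coarse, torch.cat([grad, shadow_mask], dim=1))
    return refined


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()
    out = remove_shadow(img, mask, StructureExtractor(),
                        ConditionalDenoiser(cond_channels=5),
                        ConditionalDenoiser(cond_channels=4))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The compositing step inside the loop, which keeps lit pixels fixed and only samples inside the mask, mirrors the abstract's framing of shadow removal as diffusion-based inpainting.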
Related papers
- DocShaDiffusion: Diffusion Model in Latent Space for Document Image Shadow Removal [61.375359734723716]
Existing methods tend to handle shadows over constant-color backgrounds and ignore color shadows. This paper first designs a latent-space diffusion model for document image shadow removal, called DocShaDiffusion. To address color shadows, a shadow soft-mask generation module (SSGM) is designed, and a shadow mask-aware guided diffusion module (SMGDM) is proposed to remove shadows from document images by supervising the diffusion and denoising process.
arXiv Detail & Related papers (2025-07-02T07:22:09Z)
- MetaShadow: Object-Centered Shadow Detection, Removal, and Synthesis [64.00425120075045]
Shadows are often under-considered or even ignored in image editing applications, limiting the realism of the edited results. In this paper, we introduce MetaShadow, a three-in-one versatile framework that enables detection, removal, and controllable synthesis of shadows in natural images in an object-centered fashion.
arXiv Detail & Related papers (2024-12-03T18:04:42Z)
- Generative Portrait Shadow Removal [27.98144439007323]
We introduce a high-fidelity portrait shadow removal model that can effectively enhance portrait images.
Our method also demonstrates robustness to diverse subjects captured in real environments.
arXiv Detail & Related papers (2024-10-07T22:09:22Z)
- Shadow Removal Refinement via Material-Consistent Shadow Edges [33.8383848078524]
If a shadow is removed properly, the color and texture on both sides of a shadow edge that traverses a single-material region should be the same (a toy check of this property is sketched below).
We fine-tune SAM, an image segmentation foundation model, to produce a shadow-invariant segmentation and then extract material-consistent shadow edges.
We demonstrate the effectiveness of our method in improving shadow removal results on more challenging, in-the-wild images.
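To make the material-consistency idea above concrete, here is a small, hypothetical check (not the paper's code) that measures color differences across shadow-edge pixel pairs lying inside a single segment of a shadow-invariant segmentation. The label map standing in for a fine-tuned SAM output and the binary shadow mask are assumed inputs.

```python
# Hypothetical consistency check inspired by the material-consistent shadow-edge
# idea: after shadow removal, pixels on both sides of a shadow edge that lies
# inside a single material segment should have similar color. Inputs are assumed:
# `labels` from a shadow-invariant segmentation, `shadow_mask` a binary mask.
import numpy as np


def material_consistent_edge_error(result, labels, shadow_mask):
    """Mean color difference across shadow edges within same-material segments.

    result: (H, W, 3) float image after shadow removal
    labels: (H, W) int segment ids from a shadow-invariant segmentation
    shadow_mask: (H, W) bool, True inside the shadow
    """
    diffs = []
    # Compare each pixel with its right and bottom neighbor.
    for dy, dx in ((0, 1), (1, 0)):
        a = (slice(0, result.shape[0] - dy), slice(0, result.shape[1] - dx))
        b = (slice(dy, result.shape[0]), slice(dx, result.shape[1]))
        crosses_edge = shadow_mask[a] != shadow_mask[b]   # shadow boundary
        same_material = labels[a] == labels[b]            # same segment id
        sel = crosses_edge & same_material
        if sel.any():
            diffs.append(np.abs(result[a][sel] - result[b][sel]).mean())
    return float(np.mean(diffs)) if diffs else 0.0


if __name__ == "__main__":
    h, w = 64, 64
    img = np.random.rand(h, w, 3)
    labels = (np.arange(w)[None, :] // 16).repeat(h, axis=0)  # vertical strips
    mask = np.zeros((h, w), dtype=bool)
    mask[:, : w // 2] = True                                  # left half shadowed
    print(material_consistent_edge_error(img, labels, mask))
```

Lower values indicate that colors match across material-consistent shadow edges, which is the property this entry exploits to refine shadow removal results.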
arXiv Detail & Related papers (2024-09-10T20:16:28Z)
- Single-Image Shadow Removal Using Deep Learning: A Comprehensive Survey [78.84004293081631]
Shadow patterns are arbitrary and varied, and often have highly complex trace structures.
The degradation caused by shadows is spatially non-uniform, resulting in inconsistencies in illumination and color between shadow and non-shadow areas.
Recent developments in this field are primarily driven by deep learning-based solutions.
arXiv Detail & Related papers (2024-07-11T20:58:38Z)
- Cross-Modal Spherical Aggregation for Weakly Supervised Remote Sensing Shadow Removal [22.4845448174729]
We propose a weakly supervised shadow removal network with a spherical feature space, dubbed S2-ShadowNet, to explore the best of both worlds for visible and infrared modalities.
Specifically, we employ a modal translation (visible-to-infrared) model to learn the cross-domain mapping, thus generating realistic infrared samples.
We contribute a large-scale weakly supervised shadow removal benchmark, including 4000 shadow images with corresponding shadow masks.
arXiv Detail & Related papers (2024-06-25T11:14:09Z)
- Latent Feature-Guided Diffusion Models for Shadow Removal [47.21387783721207]
We propose the use of diffusion models, as they offer a promising approach to gradually refine the details of shadow regions during the diffusion process. Our method improves this process by conditioning on a learned latent feature space that inherits the characteristics of shadow-free images. We demonstrate the effectiveness of our approach, which outperforms the previous best method by 13% in terms of RMSE on the AISTD dataset.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- Structure-Informed Shadow Removal Networks [67.57092870994029]
Existing deep learning-based shadow removal methods still produce images with shadow remnants.
We propose a novel structure-informed shadow removal network (StructNet) to leverage the image-structure information to address the shadow remnant problem.
Our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to improve them further.
arXiv Detail & Related papers (2023-01-09T06:31:52Z)
- ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow Removal [74.86415440438051]
We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant improvement in PSNR, increasing from 31.69 dB to 34.73 dB on the SRD dataset.
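As a reminder of the metric being quoted, PSNR is the standard peak signal-to-noise ratio; using MAX for the peak pixel value and MSE for the mean squared error against the ground truth:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}},
\qquad
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(I_i - \hat{I}_i\bigr)^2
```

On this scale, the reported gain of about 3 dB corresponds to roughly a 50% reduction in MSE (about 30% in RMSE).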
arXiv Detail & Related papers (2022-12-09T07:48:30Z)
- Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform state-of-the-art methods on the ISTD, Video Shadow Removal, and SRD datasets.
arXiv Detail & Related papers (2022-12-08T06:52:52Z)
- DeS3: Adaptive Attention-driven Self and Soft Shadow Removal using ViT Similarity [54.831083157152136]
We present a method that removes hard, soft and self shadows based on adaptive attention and ViT similarity.
Our method outperforms state-of-the-art methods on the SRD, AISTD, LRSS, USR and UIUC datasets.
arXiv Detail & Related papers (2022-11-15T12:15:29Z)
- Physics-based Shadow Image Decomposition for Shadow Removal [36.41558227710456]
We propose a novel deep learning method for shadow removal.
Inspired by physical models of shadow formation, we use a linear illumination transformation to model the shadow effects in the image (formalized in the equation after this entry).
We train and test our framework on the most challenging shadow removal dataset.
arXiv Detail & Related papers (2020-12-23T23:06:38Z)
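To make the linear illumination transformation mentioned in the physics-based entry above concrete, that line of work typically relates a shadowed pixel to its shadow-free counterpart through a per-channel affine map; the notation below is illustrative rather than the paper's exact formulation:

```latex
I^{\text{shadow-free}}_{k}(x) \approx w_{k}\, I^{\text{shadowed}}_{k}(x) + b_{k},
\qquad k \in \{R, G, B\}
```

Here w_k and b_k are illumination parameters estimated per image (or per shadow region), and shadow removal then amounts to predicting these parameters and applying the transform inside the shadow mask.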