SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow
Detection
- URL: http://arxiv.org/abs/2308.08935v1
- Date: Thu, 17 Aug 2023 12:10:51 GMT
- Title: SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow
Detection
- Authors: Runmin Cong, Yuchen Guan, Jinpeng Chen, Wei Zhang, Yao Zhao, and Sam
Kwong
- Abstract summary: We treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network to model these layers independently.
Our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
- Score: 85.16141353762445
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite significant progress in shadow detection, current methods still
struggle with the adverse impact of background color, which may lead to errors
when shadows are present on complex backgrounds. Drawing inspiration from the
human visual system, we treat the input shadow image as a composition of a
background layer and a shadow layer, and design a Style-guided Dual-layer
Disentanglement Network (SDDNet) to model these layers independently. To
achieve this, we devise a Feature Separation and Recombination (FSR) module
that decomposes multi-level features into shadow-related and background-related
components by offering specialized supervision for each component, while
preserving information integrity and avoiding redundancy through the
reconstruction constraint. Moreover, we propose a Shadow Style Filter (SSF)
module to guide the feature disentanglement by focusing on style
differentiation and uniformization. With these two modules and our overall
pipeline, our model effectively minimizes the detrimental effects of background
color, yielding superior performance on three public datasets with a real-time
inference speed of 32 FPS.
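The Feature Separation and Recombination idea from the abstract can be sketched as a toy example: features are split into shadow-related and background-related components, and a reconstruction constraint encourages the recombined components to recover the original features. The projection weights, shapes, and names below are illustrative assumptions, not the paper's actual implementation, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one level of a multi-level feature map: (channels, H, W).
features = rng.standard_normal((8, 16, 16))

# Hypothetical 1x1-conv-style channel projections that split the features
# into a shadow-related and a background-related component. In the paper
# these would be learned under specialized supervision for each component.
w_shadow = rng.standard_normal((8, 8)) * 0.1
w_background = rng.standard_normal((8, 8)) * 0.1

def project(weights, feat):
    # Channel-mixing projection, equivalent to a 1x1 convolution.
    return np.einsum("oc,chw->ohw", weights, feat)

shadow_part = project(w_shadow, features)
background_part = project(w_background, features)

# Reconstruction constraint: recombining the two components should recover
# the original features. Training would minimize this squared error to
# preserve information integrity across the decomposition.
reconstruction = shadow_part + background_part
recon_loss = np.mean((reconstruction - features) ** 2)
print(f"reconstruction loss: {recon_loss:.4f}")
```

With random weights the loss is large; driving it toward zero while supervising each component separately is what forces the two branches to carve the features into complementary shadow and background parts.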
Related papers
- Diff-Shadow: Global-guided Diffusion Model for Shadow Removal [46.41983327564438]
We propose Diff-Shadow, a global-guided diffusion model for high-quality shadow removal.
Our method achieves a significant improvement in terms of PSNR, increasing from 32.33dB to 33.69dB on the SRD dataset.
arXiv Detail & Related papers (2024-07-23T06:42:55Z)
- Single-Image Shadow Removal Using Deep Learning: A Comprehensive Survey [77.17812978009738]
The patterns of shadows are arbitrary, varied, and often have highly complex trace structures.
The degradation caused by shadows is spatially non-uniform, resulting in inconsistencies in illumination and color between shadow and non-shadow areas.
Recent developments in this field are primarily driven by deep learning-based solutions.
arXiv Detail & Related papers (2024-07-11T20:58:38Z)
- Cross-Modal Spherical Aggregation for Weakly Supervised Remote Sensing Shadow Removal [22.4845448174729]
We propose a weakly supervised shadow removal network with a spherical feature space, dubbed S2-ShadowNet, to explore the best of both worlds for visible and infrared modalities.
Specifically, we employ a modal translation (visible-to-infrared) model to learn the cross-domain mapping, thus generating realistic infrared samples.
We contribute a large-scale weakly supervised shadow removal benchmark, including 4000 shadow images with corresponding shadow masks.
arXiv Detail & Related papers (2024-06-25T11:14:09Z)
- Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is a significant task that remains unsolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, which often fails to handle shadows well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z)
- Learning Physical-Spatio-Temporal Features for Video Shadow Removal [42.95422940263425]
We propose the first data-driven video shadow removal model by exploiting three essential characteristics of video shadows.
Specifically, a dedicated physical branch is established to conduct local illumination estimation, which is more applicable to scenes with complex lighting and textures.
To tackle the lack of paired shadow video datasets, we synthesize a dataset with the aid of the popular game GTAV by controlling the shadow switch.
arXiv Detail & Related papers (2023-03-16T14:55:31Z)
- Structure-Informed Shadow Removal Networks [67.57092870994029]
Existing deep learning-based shadow removal methods still produce images with shadow remnants.
We propose a novel structure-informed shadow removal network (StructNet) to leverage the image-structure information to address the shadow remnant problem.
Our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to improve them further.
arXiv Detail & Related papers (2023-01-09T06:31:52Z)
- LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal [82.15476792337529]
We present a novel lightweight deep neural network that processes shadow images in the LAB color space.
The proposed network, termed "LAB-Net", is motivated by the following three observations.
Experimental results show that our LAB-Net well outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-08-27T15:34:15Z)
- SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting [54.419266357283966]
Single image 3D photography enables viewers to view a still image from novel viewpoints.
Recent approaches combine monocular depth networks with inpainting networks to achieve compelling results.
We present SLIDE, a modular and unified system for single image 3D photography.
arXiv Detail & Related papers (2021-09-02T16:37:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its information and is not responsible for any consequences of its use.