From Shadow Generation to Shadow Removal
- URL: http://arxiv.org/abs/2103.12997v1
- Date: Wed, 24 Mar 2021 05:49:08 GMT
- Title: From Shadow Generation to Shadow Removal
- Authors: Zhihao Liu, Hui Yin, Xinyi Wu, Zhenyao Wu, Yang Mi, Song Wang
- Abstract summary: We propose a new G2R-ShadowNet which leverages shadow generation for weakly-supervised shadow removal.
The proposed G2R-ShadowNet consists of three sub-networks for shadow generation, shadow removal and refinement.
In particular, the shadow generation sub-net stylises non-shadow regions to be shadow ones, leading to paired data for training the shadow-removal sub-net.
- Score: 19.486543304598264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shadow removal is a computer-vision task that aims to restore the image
content in shadow regions. While almost all recent shadow-removal methods
require shadow-free images for training, in ECCV 2020 Le and Samaras introduced
an innovative approach without this requirement by cropping patches with and
without shadows from shadow images as training samples. However, it is still
laborious and time-consuming to construct a large amount of such unpaired
patches. In this paper, we propose a new G2R-ShadowNet which leverages shadow
generation for weakly-supervised shadow removal by only using a set of shadow
images and their corresponding shadow masks for training. The proposed
G2R-ShadowNet consists of three sub-networks for shadow generation, shadow
removal and refinement, respectively, and they are jointly trained in an
end-to-end fashion. In particular, the shadow generation sub-net stylises
non-shadow regions to be shadow ones, leading to paired data for training the
shadow-removal sub-net. Extensive experiments on the ISTD dataset and the Video
Shadow Removal dataset show that the proposed G2R-ShadowNet achieves
competitive performance against the current state of the art and outperforms
Le and Samaras' patch-based shadow-removal method.
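The following is a minimal sketch of the training flow described in the abstract. It assumes a PyTorch-style implementation; the sub-network architectures, input channels, and the L1 loss are illustrative placeholders, not the authors' actual G2R-ShadowNet design.
```python
# Hedged sketch of the generation -> removal -> refinement training loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNet(nn.Module):
    """Tiny convolutional stand-in for one of the three sub-networks."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

gen_net = SubNet(in_ch=4)      # shadow generation: image + sampled non-shadow mask
removal_net = SubNet(in_ch=4)  # shadow removal: synthetic shadow image + mask
refine_net = SubNet(in_ch=3)   # refinement of the restored image

def training_step(shadow_img, nonshadow_mask):
    """One weakly-supervised step using only a shadow image and a region mask.

    shadow_img:     (B, 3, H, W) image that contains a real shadow.
    nonshadow_mask: (B, 1, H, W) mask of a shadow-free region to be stylised.
    """
    # 1) Stylise the selected non-shadow region so it looks shadowed.
    #    Its original pixels now serve as shadow-free ground truth, giving a
    #    paired sample without any real shadow-free image.
    fake_shadow = gen_net(torch.cat([shadow_img, nonshadow_mask], dim=1))
    # 2) The removal sub-net learns to undo the generated shadow.
    restored = removal_net(torch.cat([fake_shadow, nonshadow_mask], dim=1))
    # 3) The refinement sub-net polishes the coarse restoration.
    refined = refine_net(restored)
    # Supervise only inside the stylised region, where the target is known.
    target = shadow_img * nonshadow_mask
    return F.l1_loss(refined * nonshadow_mask, target)
```
At test time, reading the abstract, the generation sub-net would no longer be needed: the removal and refinement sub-nets are applied to the real shadow region of an input image (an interpretation of the paper, not a confirmed implementation detail).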
Related papers
- Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is an important task that remains unsolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, which often fails to handle shadows well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z) - Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform the state-of-the-art methods on ISTD dataset, Video Shadow Removal dataset, and SRD dataset.
arXiv Detail & Related papers (2022-12-08T06:52:52Z) - ShaDocNet: Learning Spatial-Aware Tokens in Transformer for Document
Shadow Removal [53.01990632289937]
We propose a Transformer-based model for document shadow removal.
It uses shadow context encoding and decoding in both shadow and shadow-free regions.
arXiv Detail & Related papers (2022-11-30T01:46:29Z) - DeS3: Adaptive Attention-driven Self and Soft Shadow Removal using ViT Similarity [54.831083157152136]
We present a method that removes hard, soft and self shadows based on adaptive attention and ViT similarity.
Our method outperforms state-of-the-art methods on the SRD, AISTD, LRSS, USR and UIUC datasets.
arXiv Detail & Related papers (2022-11-15T12:15:29Z) - DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using
Unsupervised Domain-Classifier Guided Network [28.6541488555978]
We propose an unsupervised domain-classifier guided shadow removal network, DC-ShadowNet.
We introduce novel losses based on physics-based shadow-free chromaticity, shadow-robust perceptual features, and boundary smoothness.
Our experiments show that all these novel components allow our method to handle soft shadows, and also to perform better on hard shadows.
arXiv Detail & Related papers (2022-07-21T12:04:16Z) - UnShadowNet: Illumination Critic Guided Contrastive Learning For Shadow
Removal [14.898039056038789]
We introduce a novel weakly supervised shadow removal framework UnShadowNet.
It is composed of a DeShadower network responsible for the removal of the extracted shadow under the guidance of an Illumination network.
We show that UnShadowNet can be easily extended to a fully-supervised set-up to exploit the ground-truth when available.
arXiv Detail & Related papers (2022-03-29T11:17:02Z) - Learning from Synthetic Shadows for Shadow Detection and Removal [43.53464469097872]
Recent shadow removal approaches all train convolutional neural networks (CNNs) on real paired shadow/shadow-free or shadow/shadow-free/mask image datasets.
We present SynShadow, a novel large-scale synthetic shadow/shadow-free/matte image triplets dataset and a pipeline to synthesize it.
arXiv Detail & Related papers (2021-01-05T18:56:34Z) - Physics-based Shadow Image Decomposition for Shadow Removal [36.41558227710456]
We propose a novel deep learning method for shadow removal.
Inspired by physical models of shadow formation, we use a linear illumination transformation to model the shadow effects in the image (see the sketch after this list).
We train and test our framework on the most challenging shadow removal dataset.
arXiv Detail & Related papers (2020-12-23T23:06:38Z) - Self-Supervised Shadow Removal [130.6657167667636]
We propose an unsupervised single image shadow removal solution via self-supervised learning by using a conditioned mask.
In contrast to existing literature, we do not require paired shadowed and shadow-free images; instead, we rely on self-supervision and jointly learn deep models to remove and add shadows to images.
arXiv Detail & Related papers (2020-10-22T11:33:41Z) - From Shadow Segmentation to Shadow Removal [34.762493656937366]
The requirement for paired shadow and shadow-free images limits the size and diversity of shadow removal datasets.
We propose a shadow removal method that can be trained using only shadow and non-shadow patches cropped from the shadow images themselves.
arXiv Detail & Related papers (2020-08-01T14:00:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.