Auto-Exposure Fusion for Single-Image Shadow Removal
- URL: http://arxiv.org/abs/2103.01255v1
- Date: Mon, 1 Mar 2021 19:09:26 GMT
- Title: Auto-Exposure Fusion for Single-Image Shadow Removal
- Authors: Lan Fu, Changqing Zhou, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Wei
Feng, Yang Liu, Song Wang
- Abstract summary: Shadow removal is still a challenging task due to its inherent background-dependent and spatial-variant properties.
Even powerful state-of-the-art deep neural networks can hardly recover a traceless, shadow-free background.
This paper proposes a new solution that formulates shadow removal as an exposure fusion problem to address these challenges.
- Score: 23.178329688546032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shadow removal is still a challenging task due to its inherent
background-dependent and spatial-variant properties, which lead to unknown and
diverse shadow patterns. Even powerful state-of-the-art deep neural networks
can hardly recover a traceless, shadow-free background. This paper proposes a
new solution by formulating the task as an exposure fusion problem. Intuitively,
we can first estimate multiple over-exposure images w.r.t. the input image so
that the shadow regions in these images have the same color as the shadow-free
areas of the input image. Then, we fuse the original input with the
over-exposure images to generate the final shadow-free counterpart.
Nevertheless, the spatial-variant property of the shadow requires the fusion to
be sufficiently "smart": it should automatically select the proper
over-exposure pixels from different images so that the final output looks
natural. To address this challenge, we propose the shadow-aware FusionNet,
which takes the shadow image as input and generates fusion weight maps across
all the over-exposure images. Moreover, we propose the boundary-aware RefineNet
to further eliminate any remaining shadow traces. Extensive experiments on the
ISTD, ISTD+, and SRD datasets validate our method's effectiveness: it achieves
better performance in shadow regions and comparable performance in non-shadow
regions compared with state-of-the-art methods. We release the model and code
at https://github.com/tsingqguo/exposure-fusion-shadow-removal.
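To make the fusion step concrete, here is a minimal sketch (not the authors' released code) of how per-pixel weight maps can blend the input image with its estimated over-exposure versions; the names fuse_exposures and weight_logits are illustrative, and the weight maps are assumed to come from a network such as the shadow-aware FusionNet.

```python
import numpy as np

def fuse_exposures(candidates: np.ndarray, weight_logits: np.ndarray) -> np.ndarray:
    """Blend exposure candidates with per-pixel weight maps.

    candidates:    (K, H, W, 3) stack of the input image and its K-1
                   estimated over-exposure versions.
    weight_logits: (K, H, W) unnormalized weight maps, e.g. predicted by a
                   fusion network from the shadow image.
    returns:       (H, W, 3) fused shadow-free estimate.
    """
    # Softmax over the exposure axis so the weights sum to 1 at every pixel.
    w = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    # Weighted per-pixel combination of the candidates.
    return (w[..., None] * candidates).sum(axis=0)

# Toy usage: the input image plus three over-exposed versions of it.
rng = np.random.default_rng(0)
candidates = rng.random((4, 64, 64, 3)).astype(np.float32)
weight_logits = rng.random((4, 64, 64)).astype(np.float32)
print(fuse_exposures(candidates, weight_logits).shape)  # (64, 64, 3)
```

A spatially varying softmax is one simple way to realize the "smart" selection described above: in shadow regions the weights can favor the over-exposed candidates, while in non-shadow regions they can fall back to the original input.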
Related papers
- Shadow Removal Refinement via Material-Consistent Shadow Edges [33.8383848078524]
If a shadow is removed properly, the color and texture on both sides of a shadow edge traversing regions of the same material should be identical.
We fine-tune SAM, an image segmentation foundation model, to produce a shadow-invariant segmentation and then extract material-consistent shadow edges.
We demonstrate the effectiveness of our method in improving shadow removal results on more challenging, in-the-wild images.
arXiv Detail & Related papers (2024-09-10T20:16:28Z)
- Single-Image Shadow Removal Using Deep Learning: A Comprehensive Survey [78.84004293081631]
The patterns of shadows are arbitrary, varied, and often have highly complex trace structures.
The degradation caused by shadows is spatially non-uniform, resulting in inconsistencies in illumination and color between shadow and non-shadow areas.
Recent developments in this field are primarily driven by deep learning-based solutions.
arXiv Detail & Related papers (2024-07-11T20:58:38Z)
- Leveraging Inpainting for Single-Image Shadow Removal [29.679542372017373]
In this work, we find that pretraining shadow removal networks on the image inpainting dataset can reduce the shadow remnants significantly.
A naive encoder-decoder network achieves restoration quality competitive with state-of-the-art methods using only 10% of the shadow and shadow-free image pairs.
Inspired by these observations, we formulate shadow removal as an adaptive fusion task that takes advantage of both shadow removal and image inpainting.
arXiv Detail & Related papers (2023-02-10T16:21:07Z)
- ShadowFormer: Global Context Helps Image Shadow Removal [41.742799378751364]
It is still challenging for deep shadow removal models to exploit the global contextual correlation between shadow and non-shadow regions.
We first propose a Retinex-based shadow model, from which we derive a novel transformer-based network, dubbed ShadowFormer.
A multi-scale channel attention framework is employed to hierarchically capture global information.
We propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the context correlation between shadow and non-shadow regions.
arXiv Detail & Related papers (2023-02-03T10:54:52Z)
- Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform state-of-the-art methods on the ISTD, Video Shadow Removal, and SRD datasets.
arXiv Detail & Related papers (2022-12-08T06:52:52Z)
- DeS3: Adaptive Attention-driven Self and Soft Shadow Removal using ViT Similarity [54.831083157152136]
We present a method that removes hard, soft and self shadows based on adaptive attention and ViT similarity.
Our method outperforms state-of-the-art methods on the SRD, AISTD, LRSS, USR and UIUC datasets.
arXiv Detail & Related papers (2022-11-15T12:15:29Z)
- SpA-Former: Transformer image shadow detection and removal via spatial attention [8.643096072885909]
We propose an end-to-end SpA-Former to recover a shadow-free image from a single shaded image.
Unlike traditional methods that require two separate steps, shadow detection followed by shadow removal, SpA-Former unifies them into a single step.
arXiv Detail & Related papers (2022-06-22T08:30:22Z)
- Shadow-Aware Dynamic Convolution for Shadow Removal [80.82708225269684]
We introduce a novel Shadow-Aware Dynamic Convolution (SADC) module to decouple the interdependence between the shadow region and the non-shadow region.
Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module.
We develop a novel intra-convolution distillation loss to strengthen the information flow from the non-shadow region to the shadow region.
arXiv Detail & Related papers (2022-05-10T14:00:48Z)
- R2D: Learning Shadow Removal to Enhance Fine-Context Shadow Detection [64.10636296274168]
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear or have blurry edges.
We propose a new method called Restore to Detect (R2D), where a deep neural network is trained for restoration (shadow removal).
We show that R2D improves shadow detection performance while detecting fine context better than other recent methods.
arXiv Detail & Related papers (2021-09-20T15:09:22Z)
- Physics-based Shadow Image Decomposition for Shadow Removal [36.41558227710456]
We propose a novel deep learning method for shadow removal.
Inspired by physical models of shadow formation, we use a linear illumination transformation to model the shadow effects in the image (see the short sketch after this list).
We train and test our framework on the most challenging shadow removal dataset.
arXiv Detail & Related papers (2020-12-23T23:06:38Z)
- Self-Supervised Shadow Removal [130.6657167667636]
We propose an unsupervised single-image shadow removal solution via self-supervised learning using a conditioned mask.
In contrast to existing literature, we do not require paired shadow and shadow-free images; instead, we rely on self-supervision and jointly learn deep models to remove and add shadows in images.
arXiv Detail & Related papers (2020-10-22T11:33:41Z)
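As a concrete illustration of the linear illumination transformation mentioned in the Physics-based Shadow Image Decomposition entry above, the sketch below relights shadow pixels with a per-channel affine model. This is a simplified, hypothetical rendering of that idea, not the paper's implementation, and the parameters w and b stand in for values a network or calibration step would estimate.

```python
import numpy as np

def relight_shadow(image: np.ndarray, shadow_mask: np.ndarray,
                   w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Apply a per-channel linear (affine) relighting model inside the shadow.

    image:       (H, W, 3) shadow image with values in [0, 1].
    shadow_mask: (H, W) values in [0, 1], 1 inside the shadow region.
    w, b:        (3,) per-channel scale and offset of the illumination model,
                 i.e. lit_pixel ~= w * shadow_pixel + b.
    """
    relit = np.clip(image * w + b, 0.0, 1.0)   # relight every pixel
    m = shadow_mask[..., None]                 # broadcast mask over channels
    return m * relit + (1.0 - m) * image       # keep non-shadow pixels unchanged

# Toy usage with made-up parameters.
img = np.full((4, 4, 3), 0.2, dtype=np.float32)
mask = np.zeros((4, 4), dtype=np.float32)
mask[:, :2] = 1.0
out = relight_shadow(img, mask, w=np.array([2.2, 2.0, 1.8]), b=np.array([0.05, 0.05, 0.05]))
print(out[0, 0], out[0, 3])  # relit shadow pixel vs. untouched non-shadow pixel
```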
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.