CRFormer: A Cross-Region Transformer for Shadow Removal
- URL: http://arxiv.org/abs/2207.01600v1
- Date: Mon, 4 Jul 2022 17:33:02 GMT
- Title: CRFormer: A Cross-Region Transformer for Shadow Removal
- Authors: Jin Wan and Hui Yin and Zhenyao Wu and Xinyi Wu and Zhihao Liu and
Song Wang
- Abstract summary: We propose a novel cross-region transformer, namely CRFormer, for shadow removal.
This is achieved by a carefully designed region-aware cross-attention operation.
Experiments on ISTD, AISTD, SRD, and Video Shadow Removal datasets demonstrate the superiority of our method.
- Score: 27.67680052355886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to restore the original intensity of shadow regions in an image and
make them compatible with the remaining non-shadow regions without a trace,
shadow removal is a very challenging problem that benefits many downstream
image/video-related tasks. Recently, transformers have shown their strong
capability in various applications by capturing global pixel interactions and
this capability is highly desirable in shadow removal. However, applying
transformers to promote shadow removal is non-trivial for the following two
reasons: 1) The patchify operation is not suitable for shadow removal due to
irregular shadow shapes; 2) shadow removal only needs one-way interaction from
the non-shadow region to the shadow region instead of the common two-way
interactions among all pixels in the image. In this paper, we propose a novel
cross-region transformer, namely CRFormer, for shadow removal which differs
from existing transformers by only considering the pixel interactions from the
non-shadow region to the shadow region without splitting images into patches.
This is achieved by a carefully designed region-aware cross-attention operation
that can aggregate the recovered shadow region features conditioned on the
non-shadow region features. Extensive experiments on ISTD, AISTD, SRD, and
Video Shadow Removal datasets demonstrate the superiority of our method
compared to other state-of-the-art methods.
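To make the key idea concrete, below is a minimal, hypothetical PyTorch sketch of a one-way region-aware cross-attention: shadow-region pixels act as queries and may only attend to non-shadow pixels (keys/values), working directly on the feature grid with no patchify step. The module name, single-head design, and tensor shapes are illustrative assumptions and not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAwareCrossAttention(nn.Module):
    """Sketch of one-way (non-shadow -> shadow) attention guided by a shadow mask."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, feat, shadow_mask):
        # feat: (B, C, H, W) features; shadow_mask: (B, 1, H, W), 1 = shadow pixel.
        B, C, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)        # (B, HW, C), no patchify
        mask = shadow_mask.flatten(2).transpose(1, 2)   # (B, HW, 1)

        q = self.to_q(tokens)
        k = self.to_k(tokens)
        v = self.to_v(tokens)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, HW, HW)

        # One-way interaction: block attention onto shadow-region keys so that
        # shadow pixels aggregate information only from non-shadow pixels.
        # (Assumes the image contains at least some non-shadow pixels.)
        key_is_shadow = mask.transpose(1, 2) > 0.5      # (B, 1, HW), broadcast over queries
        attn = attn.masked_fill(key_is_shadow, float("-inf"))
        out = F.softmax(attn, dim=-1) @ v               # (B, HW, C)

        # Only shadow-region features are updated; non-shadow features pass through.
        updated = tokens + out * mask
        return updated.transpose(1, 2).reshape(B, C, H, W)
```

Note that the dense HW x HW attention map shown here is memory-heavy; the sketch only illustrates the direction-constrained interaction, not an efficient realization.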
Related papers
- Single-Image Shadow Removal Using Deep Learning: A Comprehensive Survey [78.84004293081631]
The patterns of shadows are arbitrary, varied, and often have highly complex trace structures.
The degradation caused by shadows is spatially non-uniform, resulting in inconsistencies in illumination and color between shadow and non-shadow areas.
Recent developments in this field are primarily driven by deep learning-based solutions.
arXiv Detail & Related papers (2024-07-11T20:58:38Z)
- Cross-Modal Spherical Aggregation for Weakly Supervised Remote Sensing Shadow Removal [22.4845448174729]
We propose a weakly supervised shadow removal network with a spherical feature space, dubbed S2-ShadowNet, to explore the best of both worlds for visible and infrared modalities.
Specifically, we employ a modal translation (visible-to-infrared) model to learn the cross-domain mapping, thus generating realistic infrared samples.
We contribute a large-scale weakly supervised shadow removal benchmark, including 4000 shadow images with corresponding shadow masks.
arXiv Detail & Related papers (2024-06-25T11:14:09Z)
- Learning Restoration is Not Enough: Transfering Identical Mapping for Single-Image Shadow Removal [19.391619888009064]
State-of-the-art shadow removal methods train deep neural networks on collected shadow & shadow-free image pairs.
We find that the two tasks involved, restoring shadow regions and identically mapping non-shadow regions, exhibit poor compatibility, and using shared weights for both could lead to the model being optimized towards only one of them.
We propose to handle these two tasks separately and leverage the identical mapping results to guide the shadow restoration in an iterative manner.
arXiv Detail & Related papers (2023-05-18T01:36:23Z)
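For the identical-mapping entry above, here is a minimal, hypothetical sketch of how separating the two tasks and reusing the identical-mapping output as iterative guidance could look. The two-branch structure, module names, and fixed number of refinement steps are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class IterativeShadowRestorer(nn.Module):
    """Sketch: an identity branch for non-shadow pixels guides shadow restoration."""
    def __init__(self, channels=32, iterations=3):
        super().__init__()
        self.iterations = iterations
        # Branch intended to behave as an identical mapping on non-shadow pixels.
        self.identity_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )
        # Restoration branch sees the shadow image, the current guided estimate,
        # and the shadow mask, and predicts a residual correction.
        self.restore_branch = nn.Sequential(
            nn.Conv2d(3 + 3 + 1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, shadow_img, mask):
        # shadow_img: (B, 3, H, W); mask: (B, 1, H, W), 1 = shadow pixel.
        id_out = self.identity_branch(shadow_img)  # identical-mapping result
        estimate = shadow_img
        for _ in range(self.iterations):
            # Compose guidance: trust the identity branch outside shadows,
            # keep the current estimate inside shadows.
            guided = id_out * (1 - mask) + estimate * mask
            residual = self.restore_branch(torch.cat([shadow_img, guided, mask], dim=1))
            estimate = guided + residual * mask  # only update shadow pixels
        return estimate
```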
- ShadowFormer: Global Context Helps Image Shadow Removal [41.742799378751364]
It is still challenging for the deep shadow removal model to exploit the global contextual correlation between shadow and non-shadow regions.
We first propose a Retinex-based shadow model, from which we derive a novel transformer-based network, dubbed ShadowFormer.
A multi-scale channel attention framework is employed to hierarchically capture the global information.
We propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the context correlation between shadow and non-shadow regions.
arXiv Detail & Related papers (2023-02-03T10:54:52Z)
- Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform state-of-the-art methods on the ISTD, Video Shadow Removal, and SRD datasets.
arXiv Detail & Related papers (2022-12-08T06:52:52Z)
- ShaDocNet: Learning Spatial-Aware Tokens in Transformer for Document Shadow Removal [53.01990632289937]
We propose a Transformer-based model for document shadow removal.
It uses shadow context encoding and decoding in both shadow and shadow-free regions.
arXiv Detail & Related papers (2022-11-30T01:46:29Z)
- DeS3: Adaptive Attention-driven Self and Soft Shadow Removal using ViT Similarity [54.831083157152136]
We present a method that removes hard, soft and self shadows based on adaptive attention and ViT similarity.
Our method outperforms state-of-the-art methods on the SRD, AISTD, LRSS, USR and UIUC datasets.
arXiv Detail & Related papers (2022-11-15T12:15:29Z)
- SpA-Former: Transformer image shadow detection and removal via spatial attention [8.643096072885909]
We propose an end-to-end SpA-Former to recover a shadow-free image from a single shaded image.
Unlike traditional methods that require two steps for shadow detection and then shadow removal, the SpA-Former unifies these steps into one.
arXiv Detail & Related papers (2022-06-22T08:30:22Z)
- Shadow-Aware Dynamic Convolution for Shadow Removal [80.82708225269684]
We introduce a novel Shadow-Aware Dynamic Convolution (SADC) module to decouple the interdependence between the shadow region and the non-shadow region.
Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module.
We develop a novel intra-convolution distillation loss to strengthen the information flow from the non-shadow region to the shadow region.
arXiv Detail & Related papers (2022-05-10T14:00:48Z)
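For the SADC entry above, a minimal, hypothetical sketch of the described idea: a lightweight convolution handles the easier non-shadow region, a heavier path handles the shadow region, the two outputs are fused with the shadow mask, and a distillation-style term pushes information from the non-shadow branch toward the shadow branch. The module names, path designs, and loss form are illustrative assumptions, not the paper's exact SADC module or intra-convolution distillation loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShadowAwareDualConv(nn.Module):
    """Sketch: separate conv paths for shadow and non-shadow regions, fused by a mask."""
    def __init__(self, in_ch=3, mid_ch=16, out_ch=3):
        super().__init__()
        # Lightweight path for the non-shadow region (a simple color mapping).
        self.nonshadow_conv = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1),
        )
        # Heavier path for the harder shadow region.
        self.shadow_conv = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 3, padding=1),
        )

    def forward(self, x, shadow_mask):
        # x: (B, 3, H, W); shadow_mask: (B, 1, H, W), 1 = shadow pixel.
        ns_out = self.nonshadow_conv(x)
        s_out = self.shadow_conv(x)
        fused = ns_out * (1 - shadow_mask) + s_out * shadow_mask
        # Illustrative distillation-style term: encourage the shadow path to
        # mimic the non-shadow path outside the shadow region, so knowledge
        # flows from the non-shadow branch to the shadow branch.
        distill_loss = F.l1_loss(s_out * (1 - shadow_mask),
                                 ns_out.detach() * (1 - shadow_mask))
        return fused, distill_loss
```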