MetaShadow: Object-Centered Shadow Detection, Removal, and Synthesis
- URL: http://arxiv.org/abs/2412.02635v1
- Date: Tue, 03 Dec 2024 18:04:42 GMT
- Title: MetaShadow: Object-Centered Shadow Detection, Removal, and Synthesis
- Authors: Tianyu Wang, Jianming Zhang, Haitian Zheng, Zhihong Ding, Scott Cohen, Zhe Lin, Wei Xiong, Chi-Wing Fu, Luis Figueroa, Soo Ye Kim
- Abstract summary: Shadows are often under-considered or even ignored in image editing applications, limiting the realism of the edited results.
In this paper, we introduce MetaShadow, a three-in-one versatile framework that enables detection, removal, and controllable synthesis of shadows in natural images in an object-centered fashion.
- Score: 64.00425120075045
- License:
- Abstract: Shadows are often under-considered or even ignored in image editing applications, limiting the realism of the edited results. In this paper, we introduce MetaShadow, a three-in-one versatile framework that enables detection, removal, and controllable synthesis of shadows in natural images in an object-centered fashion. MetaShadow combines the strengths of two cooperative components: Shadow Analyzer, for object-centered shadow detection and removal, and Shadow Synthesizer, for reference-based controllable shadow synthesis. Notably, we optimize the learning of the intermediate features from Shadow Analyzer to guide Shadow Synthesizer to generate more realistic shadows that blend seamlessly with the scene. Extensive evaluations on multiple shadow benchmark datasets show significant improvements of MetaShadow over the existing state-of-the-art methods on object-centered shadow detection, removal, and synthesis. MetaShadow excels in image-editing tasks such as object removal, relocation, and insertion, pushing the boundaries of object-centered image editing.
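The abstract describes a two-component design: a Shadow Analyzer for detection and removal, whose intermediate features then guide a Shadow Synthesizer. The sketch below is purely illustrative; `ShadowAnalyzer` and `ShadowSynthesizer` are toy stand-ins with hand-written heuristics, not the paper's learned modules, and the feature-passing is reduced to a plain dictionary.

```python
import numpy as np

# Illustrative sketch only: the paper's components are neural networks; these
# classes are toy stand-ins that mimic the data flow (analyzer features are
# handed to the synthesizer), not the actual method.

class ShadowAnalyzer:
    """Detects an object's shadow and removes it, exposing intermediate features."""
    def __call__(self, image, object_mask):
        # Toy 'detection': treat dark pixels outside the object as shadow.
        shadow_mask = (image.mean(axis=-1) < 0.3) & ~object_mask
        # Toy 'removal': brighten the detected shadow pixels.
        deshadowed = image.copy()
        deshadowed[shadow_mask] = np.clip(deshadowed[shadow_mask] * 2.0, 0.0, 1.0)
        features = {"shadow_mask": shadow_mask}  # guidance for the synthesizer
        return deshadowed, features

class ShadowSynthesizer:
    """Synthesizes a shadow for an object, guided by analyzer features."""
    def __call__(self, image, object_mask, features):
        out = image.copy()
        # Toy 'synthesis': darken a region offset from the object; a real
        # synthesizer would condition on the learned analyzer features.
        cast = np.roll(object_mask, shift=5, axis=1) & ~object_mask
        out[cast] *= 0.4
        return out

# Usage: remove an object's shadow, then synthesize a new one.
img = np.full((32, 32, 3), 0.8)
obj = np.zeros((32, 32), dtype=bool)
obj[8:16, 8:16] = True
img[16:24, 8:16] *= 0.25                      # fake cast shadow below the object
clean, feats = ShadowAnalyzer()(img, obj)     # shadow detected and brightened
edited = ShadowSynthesizer()(clean, obj, feats)
```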
Related papers
- Shadow Removal Refinement via Material-Consistent Shadow Edges [33.8383848078524]
If a shadow is removed properly, the original color and texture should match on both sides of shadow edges that traverse regions of the same material.
We fine-tune SAM, an image segmentation foundation model, to produce a shadow-invariant segmentation and then extract material-consistent shadow edges.
We demonstrate the effectiveness of our method in improving shadow removal results on more challenging, in-the-wild images.
arXiv Detail & Related papers (2024-09-10T20:16:28Z) - SwinShadow: Shifted Window for Ambiguous Adjacent Shadow Detection [90.4751446041017]
We present SwinShadow, a transformer-based architecture that fully utilizes the powerful shifted window mechanism for detecting adjacent shadows.
The whole process can be divided into three parts: encoder, decoder, and feature integration.
Experiments on three shadow detection benchmark datasets, SBU, UCF, and ISTD, demonstrate that our network achieves good performance in terms of balance error rate (BER)
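The balance error rate (BER) mentioned above is the standard shadow-detection metric: it averages the error rates on shadow and non-shadow pixels separately, so a detector cannot score well by favoring the majority class. A minimal implementation, as a sketch:

```python
import numpy as np

def balance_error_rate(pred, gt):
    """Balance error rate (BER) for binary shadow masks, in percent.

    BER = 100 * (1 - 0.5 * (TP/N_pos + TN/N_neg)), i.e. the mean of the
    per-class error rates on shadow and non-shadow pixels.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    n_pos = gt.sum()                 # shadow pixels
    n_neg = (~gt).sum()              # non-shadow pixels
    tp = (pred & gt).sum()           # shadow pixels correctly detected
    tn = (~pred & ~gt).sum()         # non-shadow pixels correctly rejected
    return 100.0 * (1.0 - 0.5 * (tp / n_pos + tn / n_neg))

# A perfect prediction scores 0; predicting 'all shadow' on a half-shadow
# mask scores 50, since one class is fully right and the other fully wrong.
gt = np.zeros((4, 4), dtype=bool)
gt[:2] = True
perfect = balance_error_rate(gt, gt)
all_shadow = balance_error_rate(np.ones((4, 4), dtype=bool), gt)
```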
arXiv Detail & Related papers (2024-08-07T03:16:33Z) - Progressive Recurrent Network for Shadow Removal [99.1928825224358]
Single-image shadow removal is a significant task that is still unresolved.
Most existing deep learning-based approaches attempt to remove the shadow directly, which cannot handle shadows well.
We propose a simple but effective Progressive Recurrent Network (PRNet) to remove the shadow progressively.
arXiv Detail & Related papers (2023-11-01T11:42:45Z) - ShadowFormer: Global Context Helps Image Shadow Removal [41.742799378751364]
It is still challenging for the deep shadow removal model to exploit the global contextual correlation between shadow and non-shadow regions.
We first propose a Retinex-based shadow model, from which we derive a novel transformer-based network, dubbed ShadowFormer.
A multi-scale channel attention framework is employed to hierarchically capture the global information.
We propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the context correlation between shadow and non-shadow regions.
arXiv Detail & Related papers (2023-02-03T10:54:52Z) - Structure-Informed Shadow Removal Networks [67.57092870994029]
Existing deep learning-based shadow removal methods still produce images with shadow remnants.
We propose a novel structure-informed shadow removal network (StructNet) to leverage the image-structure information to address the shadow remnant problem.
Our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to improve them further.
arXiv Detail & Related papers (2023-01-09T06:31:52Z) - Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform the state-of-the-art methods on ISTD dataset, Video Shadow Removal dataset, and SRD dataset.
arXiv Detail & Related papers (2022-12-08T06:52:52Z) - ShaDocNet: Learning Spatial-Aware Tokens in Transformer for Document Shadow Removal [53.01990632289937]
We propose a Transformer-based model for document shadow removal.
It uses shadow context encoding and decoding in both shadow and shadow-free regions.
arXiv Detail & Related papers (2022-11-30T01:46:29Z) - Learning from Synthetic Shadows for Shadow Detection and Removal [43.53464469097872]
Recent shadow removal approaches all train convolutional neural networks (CNNs) on real paired shadow/shadow-free or shadow/shadow-free/mask image datasets.
We present SynShadow, a novel large-scale synthetic dataset of shadow/shadow-free/matte image triplets, together with a pipeline to synthesize it.
arXiv Detail & Related papers (2021-01-05T18:56:34Z) - Physics-based Shadow Image Decomposition for Shadow Removal [36.41558227710456]
We propose a novel deep learning method for shadow removal.
Inspired by physical models of shadow formation, we use a linear illumination transformation to model the shadow effects in the image.
We train and test our framework on the most challenging shadow removal dataset.
arXiv Detail & Related papers (2020-12-23T23:06:38Z)
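The last entry's linear illumination idea can be made concrete: within the shadow region, the shadow-free pixel is modeled as an affine per-channel function of the shadowed pixel, I_lit = w * I_shadow + b. The sketch below fits (w, b) by least squares from paired pixels on synthetic data; this is only an illustration of the model's form, since the paper estimates the parameters with a learned network rather than a direct fit.

```python
import numpy as np

def fit_illumination(shadowed, lit):
    """Fit per-channel (w, b) so that lit ~= w * shadowed + b (least squares)."""
    params = []
    for c in range(shadowed.shape[-1]):
        x, y = shadowed[..., c].ravel(), lit[..., c].ravel()
        A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
        (w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        params.append((w, b))
    return params

def relight(shadowed, params):
    """Apply the fitted linear transform to estimate a shadow-free image."""
    out = np.empty_like(shadowed)
    for c, (w, b) in enumerate(params):
        out[..., c] = w * shadowed[..., c] + b
    return np.clip(out, 0.0, 1.0)

# Synthetic check: darken an image by a known affine map, then recover it.
rng = np.random.default_rng(0)
lit = rng.uniform(0.2, 0.9, size=(16, 16, 3))
shadowed = 0.5 * lit - 0.05          # known shadow effect: w=0.5, b=-0.05
params = fit_illumination(shadowed, lit)
recovered = relight(shadowed, params)
```

Because the synthetic shadow is exactly affine, the fit inverts it: the recovered weights are (2.0, 0.1) per channel and the reconstruction matches the lit image.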
This list is automatically generated from the titles and abstracts of the papers in this site.