Learning Physical-Spatio-Temporal Features for Video Shadow Removal
- URL: http://arxiv.org/abs/2303.09370v1
- Date: Thu, 16 Mar 2023 14:55:31 GMT
- Title: Learning Physical-Spatio-Temporal Features for Video Shadow Removal
- Authors: Zhihao Chen, Liang Wan, Yefan Xiao, Lei Zhu, Huazhu Fu
- Abstract summary: We propose the first data-driven video shadow removal model, termed PSTNet, by exploiting three essential characteristics of video shadows.
Specifically, a dedicated physical branch is established to conduct local illumination estimation, which is more applicable for scenes with complex lighting and textures.
To tackle the lack of paired shadow video datasets, we synthesize a dataset with the aid of the popular game GTAV by controlling the switch of the shadow renderer.
- Score: 42.95422940263425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shadow removal in a single image has received increasing attention in recent
years. However, removing shadows over dynamic scenes remains largely
under-explored. In this paper, we propose the first data-driven video shadow
removal model, termed PSTNet, by exploiting three essential characteristics of
video shadows, i.e., physical property, spatial relation, and temporal
coherence. Specifically, a dedicated physical branch is established to conduct
local illumination estimation, which is more applicable for scenes with complex
lighting and textures, and its physical features are then enhanced via a
mask-guided attention strategy. We further develop a progressive aggregation
module to enhance the spatial and temporal characteristics of the feature maps
and to effectively integrate the three kinds of features. Furthermore, to tackle
the lack of paired shadow video datasets, we synthesize a dataset (SVSRD-85)
with the aid of the popular game GTAV by controlling the switch of the shadow
renderer. Experiments against 9 state-of-the-art models, including image shadow
removers and image/video restoration methods, show that our method improves on
the best SOTA in terms of RMSE for the shadow area by 14.7. In addition, we
develop a lightweight model adaptation strategy to make our synthetically
trained model effective in real-world scenes. Visual comparison on the public
SBU-TimeLapse dataset verifies the generalization ability of our model in real
scenes.
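The abstract does not give implementation details for the mask-guided attention step, but the general idea of using a shadow mask to gate which spatial locations of a feature map are enhanced can be illustrated with a minimal NumPy sketch. The gating formula, the residual (1 + attention) weighting, and all function names below are illustrative assumptions, not the authors' actual PSTNet design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_guided_attention(features, shadow_mask):
    """Hypothetical mask-guided attention: a soft shadow mask decides
    which spatial locations of the feature map are amplified.

    features:    (C, H, W) feature map
    shadow_mask: (H, W) soft mask in [0, 1], where 1 marks shadow
    """
    # Map the mask to per-location attention weights in (0, 1);
    # the scale factor 4.0 sharpens the transition and is arbitrary.
    attn = sigmoid(4.0 * (shadow_mask - 0.5))          # (H, W)
    # Residual gating: shadow locations get a larger multiplier,
    # while non-shadow locations are left close to their input values.
    return features * (1.0 + attn[None, :, :])

# Toy usage: a constant feature map with a small shadow square.
feats = np.ones((2, 4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
out = mask_guided_attention(feats, mask)
```

In this toy example, the responses inside the masked square end up larger than those outside it, which is the qualitative behavior one would want from a shadow-mask-driven enhancement.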
Related papers
- Regional Attention for Shadow Removal [10.575174563308046]
This work devises a lightweight yet accurate shadow removal framework.
We analyze the characteristics of the shadow removal task and design a novel regional attention mechanism.
Unlike existing attention-based models, our regional attention strategy allows each shadow region to interact more rationally with its surrounding non-shadow areas.
arXiv Detail & Related papers (2024-11-21T15:10:44Z)
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show our sparse-view feed-forward RelitLRM offers competitive relighting results to state-of-the-art dense-view optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- Soft-Hard Attention U-Net Model and Benchmark Dataset for Multiscale Image Shadow Removal [2.999888908665659]
This study proposes a novel deep learning architecture, named Soft-Hard Attention U-net (SHAU), focusing on multiscale shadow removal.
It provides a novel synthetic dataset, named Multiscale Shadow Removal dataset (MSRD), containing complex shadow patterns of multiple scales.
The results demonstrate the effectiveness of SHAU over the relevant state-of-the-art shadow removal methods across various benchmark datasets.
arXiv Detail & Related papers (2024-08-07T12:42:06Z)
- Deshadow-Anything: When Segment Anything Model Meets Zero-shot Shadow Removal [8.555176637147648]
We develop Deshadow-Anything, considering the generalization of large-scale datasets, to achieve image shadow removal.
The diffusion model can diffuse along the edges and textures of an image, helping to remove shadows while preserving the details of the image.
Experiments on shadow removal tasks demonstrate that these methods can effectively improve image restoration performance.
arXiv Detail & Related papers (2023-09-21T01:35:13Z)
- SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection [85.16141353762445]
We treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network to model these layers independently.
Our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
arXiv Detail & Related papers (2023-08-17T12:10:51Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Structure-Informed Shadow Removal Networks [67.57092870994029]
Existing deep learning-based shadow removal methods still produce images with shadow remnants.
We propose a novel structure-informed shadow removal network (StructNet) to leverage the image-structure information to address the shadow remnant problem.
Our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to improve them further.
arXiv Detail & Related papers (2023-01-09T06:31:52Z)
- Shadow-Aware Dynamic Convolution for Shadow Removal [80.82708225269684]
We introduce a novel Shadow-Aware Dynamic Convolution (SADC) module to decouple the interdependence between the shadow region and the non-shadow region.
Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module.
We develop a novel intra-convolution distillation loss to strengthen the information flow from the non-shadow region to the shadow region.
arXiv Detail & Related papers (2022-05-10T14:00:48Z)
- SSN: Soft Shadow Network for Image Compositing [26.606890595862826]
We introduce an interactive Soft Shadow Network (SSN) to generate controllable soft shadows for image compositing.
SSN takes a 2D object mask as input and thus is agnostic to image types such as painting and vector art.
An environment light map is used to control the shadow's characteristics, such as angle and softness.
arXiv Detail & Related papers (2020-07-16T09:36:39Z)