SSN: Soft Shadow Network for Image Compositing
- URL: http://arxiv.org/abs/2007.08211v3
- Date: Thu, 1 Apr 2021 19:14:00 GMT
- Title: SSN: Soft Shadow Network for Image Compositing
- Authors: Yichen Sheng, Jianming Zhang, Bedrich Benes
- Abstract summary: We introduce an interactive Soft Shadow Network (SSN) to generate controllable soft shadows for image compositing.
SSN takes a 2D object mask as input and thus is agnostic to image types such as painting and vector art.
An environment light map is used to control the shadow's characteristics, such as angle and softness.
- Score: 26.606890595862826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an interactive Soft Shadow Network (SSN) to generate
controllable soft shadows for image compositing. SSN takes a 2D object mask as
input and thus is agnostic to image types such as painting and vector art. An
environment light map is used to control the shadow's characteristics, such as
angle and softness. SSN employs an Ambient Occlusion Prediction module to
predict an intermediate ambient occlusion map, which can be further refined by
the user to provide geometric cues to modulate the shadow generation. To train
our model, we design an efficient pipeline to produce diverse soft shadow
training data using 3D object models. In addition, we propose an inverse shadow
map representation to improve model training. We demonstrate that our model
produces realistic soft shadows in real-time. Our user studies show that the
generated shadows are often indistinguishable from shadows calculated by a
physics-based renderer and users can easily use SSN through an interactive
application to generate specific shadow effects in minutes.
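As a concrete illustration of the interface the abstract describes, the following is a minimal sketch of how an SSN-style model might be driven at inference time. The Gaussian construction of the environment light map reflects the controls mentioned above (the blob's position steers the shadow angle, its width the softness), but the network class `SoftShadowNet`, the checkpoint name, and the compositing step are hypothetical placeholders, not the authors' released API.
```python
# Hypothetical usage sketch for an SSN-style soft shadow generator.
# SoftShadowNet, ssn.ckpt and the compositing line are placeholders, not the authors' API.
import numpy as np
import torch

def gaussian_light_map(h=16, w=32, center=(0.3, 0.7), softness=0.05):
    """Toy environment light map: a single 2D Gaussian on a lat-long grid.
    The Gaussian's position controls the shadow angle; its width controls softness."""
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    cy, cx = center
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * softness ** 2))
    return g / (g.sum() + 1e-8)  # normalize total light energy

# Assumed inputs: a binary 2D object mask (the cutout to composite) and the light map.
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 96:224, 112:144] = 1.0                                 # placeholder upright object
ibl = torch.from_numpy(gaussian_light_map()).float()[None, None]  # shape 1x1x16x32

# model = SoftShadowNet.load_from_checkpoint("ssn.ckpt")          # hypothetical loader
# with torch.no_grad():
#     ao_map, soft_shadow = model(mask, ibl)    # intermediate AO map + final soft shadow
# composite = background * (1.0 - 0.6 * soft_shadow)              # darken the backdrop
```
In such a setup, the intermediate ambient occlusion map could be surfaced to the user for editing before the final shadow pass, matching the refinement step the abstract describes; widening the Gaussian, or summing several Gaussians, would correspond to softer or multi-light shadows.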
Related papers
- SwinShadow: Shifted Window for Ambiguous Adjacent Shadow Detection [90.4751446041017]
We present SwinShadow, a transformer-based architecture that fully utilizes the powerful shifted window mechanism for detecting adjacent shadows.
The whole process can be divided into three parts: encoder, decoder, and feature integration.
Experiments on three shadow detection benchmark datasets, SBU, UCF, and ISTD, demonstrate that our network achieves good performance in terms of balance error rate (BER).
arXiv Detail & Related papers (2024-08-07T03:16:33Z) - SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow
Detection [85.16141353762445]
We treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network to model these layers independently.
Our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
arXiv Detail & Related papers (2023-08-17T12:10:51Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - Learning Physical-Spatio-Temporal Features for Video Shadow Removal [42.95422940263425]
We propose the first data-driven video shadow removal model by exploiting three essential characteristics of video shadows.
Specifically, a dedicated physical branch is established to perform local illumination estimation, which is more applicable to scenes with complex lighting textures.
To tackle the lack of paired shadow video datasets, we synthesize a dataset with the aid of the popular game GTAV by switching shadows on and off.
arXiv Detail & Related papers (2023-03-16T14:55:31Z) - Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform the state-of-the-art methods on ISTD dataset, Video Shadow Removal dataset, and SRD dataset.
arXiv Detail & Related papers (2022-12-08T06:52:52Z) - Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose (a simplified sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-07-12T08:29:51Z) - Towards Learning Neural Representations from Shadows [11.60149896896201]
We present a method that learns neural scene representations from only shadows present in the scene.
Our framework is highly generalizable and can work alongside existing 3D reconstruction techniques.
arXiv Detail & Related papers (2022-03-29T23:13:41Z) - R2D: Learning Shadow Removal to Enhance Fine-Context Shadow Detection [64.10636296274168]
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear or have blurry edges.
We propose a new method called Restore to Detect (R2D), where a deep neural network is trained for restoration (shadow removal).
We show that our proposed method R2D improves the shadow detection performance while being able to detect fine context better compared to the other recent methods.
arXiv Detail & Related papers (2021-09-20T15:09:22Z) - Learning from Synthetic Shadows for Shadow Detection and Removal [43.53464469097872]
Recent shadow removal approaches all train convolutional neural networks (CNNs) on real paired shadow/shadow-free or shadow/shadow-free/mask image datasets.
We present SynShadow, a novel large-scale synthetic shadow/shadow-free/matte image triplets dataset and a pipeline to synthesize it.
arXiv Detail & Related papers (2021-01-05T18:56:34Z) - Physics-based Shadow Image Decomposition for Shadow Removal [36.41558227710456]
We propose a novel deep learning method for shadow removal.
Inspired by physical models of shadow formation, we use a linear illumination transformation to model the shadow effects in the image.
We train and test our framework on the most challenging shadow removal dataset.
arXiv Detail & Related papers (2020-12-23T23:06:38Z)
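As referenced in the pixel-height entry above, the sketch below illustrates that representation with a deliberately simplified hard-shadow projector: each object pixel carries its height above its ground contact point, and its shadow is placed at that footprint, displaced along a fixed image-space cast direction in proportion to the height. The function name, the `cast_dir` parameter, and the projection rule itself are illustrative assumptions; the paper's actual formulation accounts for the light's image-space position and the camera pose.
```python
# Simplified illustration of a pixel-height-style hard shadow projection.
# The projection rule and names here are assumptions, not the paper's exact method.
import numpy as np

def project_hard_shadow(mask, pixel_height, cast_dir=(0.7, 0.3)):
    """mask: HxW binary object mask.
    pixel_height: HxW map; each object pixel's height (in pixels) above the point
                  where its vertical through-line meets the ground in the image.
    cast_dir: (dx, dy) image-space direction the shadow slides along the ground."""
    h, w = mask.shape
    shadow = np.zeros_like(mask, dtype=np.float32)
    dx, dy = cast_dir
    ys, xs = np.nonzero(mask > 0)
    for y, x in zip(ys, xs):
        t = float(pixel_height[y, x])
        gy = int(round(y + t + dy * t))   # ground footprint, then slide along the ground
        gx = int(round(x + dx * t))       # sideways offset grows with height
        if 0 <= gy < h and 0 <= gx < w:
            shadow[gy, gx] = 1.0
    return shadow

# Toy example: an upright box whose ground contact line is at row 99.
mask = np.zeros((128, 128), dtype=np.float32)
mask[40:100, 60:70] = 1.0
height = np.maximum(99 - np.arange(128), 0)[:, None] * mask
hard_shadow = project_hard_shadow(mask, height)
```
Averaging such projections over jittered cast directions (or blurring with a kernel that grows away from the object) yields a crude soft shadow, which is the same kind of control the SSN abstract above exposes through its environment light map.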
This list is automatically generated from the titles and abstracts of the papers on this site.