Shadow Generation for Composite Image in Real-world Scenes
- URL: http://arxiv.org/abs/2104.10338v1
- Date: Wed, 21 Apr 2021 03:30:02 GMT
- Title: Shadow Generation for Composite Image in Real-world Scenes
- Authors: Yan Hong, Li Niu, Jianfu Zhang, Liqing Zhang
- Abstract summary: We propose a novel shadow generation network, SGRNet, which consists of a shadow mask prediction stage and a shadow filling stage.
In the shadow mask prediction stage, foreground and background information interact thoroughly to generate the foreground shadow mask.
In the shadow filling stage, shadow parameters are predicted to fill the shadow area.
- Score: 23.532079444113528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image composition aims to insert a foreground object into a background
image. Most previous image composition methods focus on adjusting the
foreground to make it compatible with the background while ignoring the shadow
that the foreground casts on the background. In this work, we focus on generating
a plausible shadow for the foreground object in the composite image. First, we
contribute DESOBA, a real-world shadow generation dataset built by generating
synthetic composite images from paired real images and deshadowed images.
Then, we propose a novel shadow generation network, SGRNet, which consists of a
shadow mask prediction stage and a shadow filling stage. In the shadow mask
prediction stage, foreground and background information interact thoroughly
to generate the foreground shadow mask. In the shadow filling stage, shadow
parameters are predicted to fill the shadow area. Extensive experiments
on our DESOBA dataset and on real composite images demonstrate the effectiveness
of our proposed method.
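The two-stage design described in the abstract (mask prediction, then parameterized filling) can be illustrated with a toy sketch. This is not the authors' SGRNet: the stage-1 mask predictor is assumed to be given, and the filling stage is reduced to a per-channel affine darkening (scale, offset) applied inside the predicted mask, which is one common way to parameterize a shadow.

```python
import numpy as np

def fill_shadow(composite, shadow_mask, scale, offset):
    """Toy stage-2 filling: darken the predicted shadow region with
    per-channel affine parameters (hypothetical parameterization).

    composite:     HxWx3 float image in [0, 1]
    shadow_mask:   HxW float mask in [0, 1] (assumed stage-1 output)
    scale, offset: length-3 arrays, the predicted shadow parameters
    """
    mask = shadow_mask[..., None]          # HxWx1 for broadcasting
    shadowed = composite * scale + offset  # affine shadow model
    out = composite * (1.0 - mask) + shadowed * mask
    return np.clip(out, 0.0, 1.0)

# Toy example: uniform grey image, square shadow mask, 50% darkening.
img = np.full((4, 4, 3), 0.8)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
out = fill_shadow(img, mask,
                  scale=np.array([0.5, 0.5, 0.5]),
                  offset=np.array([0.0, 0.0, 0.0]))
```

In the sketch, pixels outside the mask are untouched while pixels inside it are darkened by the predicted parameters; the real network learns both the mask and the parameters end to end.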
Related papers
- Shadow Generation for Composite Image Using Diffusion model [16.316311264197324]
We resort to a foundation model with rich prior knowledge of natural shadow images.
We first adapt ControlNet to our task and then propose intensity modulation modules to improve the shadow intensity.
Experimental results on the DESOBA and DESOBAv2 datasets, as well as on real composite images, demonstrate the superior capability of our model for the shadow generation task.
arXiv Detail & Related papers (2024-03-22T14:27:58Z) - DESOBAv2: Towards Large-scale Real-world Dataset for Shadow Generation [19.376935979734714]
In this work, we focus on generating a plausible shadow for the inserted foreground object to make the composite image more realistic.
To supplement the existing small-scale dataset DESOBA, we create a large-scale dataset called DESOBAv2.
arXiv Detail & Related papers (2023-08-19T10:21:23Z) - SDDNet: Style-guided Dual-layer Disentanglement Network for Shadow Detection [85.16141353762445]
We treat the input shadow image as a composition of a background layer and a shadow layer, and design a Style-guided Dual-layer Disentanglement Network to model these layers independently.
Our model effectively minimizes the detrimental effects of background color, yielding superior performance on three public datasets with a real-time inference speed of 32 FPS.
arXiv Detail & Related papers (2023-08-17T12:10:51Z) - Shadow Generation with Decomposed Mask Prediction and Attentive Shadow Filling [26.780859992812186]
We focus on generating plausible shadows for the inserted foreground object to make the composite image more realistic.
To supplement the existing small-scale dataset, we create a large-scale dataset called RdSOBA with rendering techniques.
We design a two-stage network named DMASNet with mask prediction and attentive shadow filling.
arXiv Detail & Related papers (2023-06-30T01:32:16Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Shadow Removal by High-Quality Shadow Synthesis [78.56549207362863]
HQSS employs a shadow feature encoder and a generator to synthesize pseudo images.
HQSS is observed to outperform the state-of-the-art methods on the ISTD, Video Shadow Removal, and SRD datasets.
arXiv Detail & Related papers (2022-12-08T06:52:52Z) - Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometry, which is not always available.
Deep learning-based shadow synthesis methods learn a mapping from light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, the ground, and the camera pose.
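As a rough illustration of the idea, not the paper's actual method: a per-pixel height above the ground can be used to cast a hard shadow by projecting each object pixel along a 2D light direction scaled by its height. All names and the projection rule below are hypothetical simplifications.

```python
import numpy as np

def project_hard_shadow(obj_mask, pixel_height, light_dir):
    """Toy hard-shadow projection from a pixel-height map.

    obj_mask:     HxW bool, the foreground object
    pixel_height: HxW float, height of each object pixel above the ground
    light_dir:    (dy, dx) image-plane offset per unit height
                  (a stand-in for the light direction)
    """
    h, w = obj_mask.shape
    shadow = np.zeros((h, w), dtype=bool)
    dy, dx = light_dir
    ys, xs = np.nonzero(obj_mask)
    for y, x in zip(ys, xs):
        ph = pixel_height[y, x]
        sy = int(round(y + dy * ph))  # ground position hit by this pixel
        sx = int(round(x + dx * ph))
        if 0 <= sy < h and 0 <= sx < w:
            shadow[sy, sx] = True
    return shadow

# Example: one object pixel at (1, 2), two units above the ground.
mask = np.zeros((5, 5), dtype=bool)
mask[1, 2] = True
height = np.zeros((5, 5))
height[1, 2] = 2.0
shadow = project_hard_shadow(mask, height, light_dir=(1.0, 0.5))
```

Because the shadow offset scales with the per-pixel height, changing the light direction moves and stretches the shadow in a geometrically consistent way, which is what makes the representation controllable.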
arXiv Detail & Related papers (2022-07-12T08:29:51Z) - Shadow-Aware Dynamic Convolution for Shadow Removal [80.82708225269684]
We introduce a novel Shadow-Aware Dynamic Convolution (SADC) module to decouple the interdependence between the shadow region and the non-shadow region.
Inspired by the fact that the color mapping of the non-shadow region is easier to learn, our SADC processes the non-shadow region with a lightweight convolution module.
We develop a novel intra-convolution distillation loss to strengthen the information flow from the non-shadow region to the shadow region.
arXiv Detail & Related papers (2022-05-10T14:00:48Z) - SIDNet: Learning Shading-aware Illumination Descriptor for Image Harmonization [10.655037947250516]
Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background.
We decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under background illumination.
arXiv Detail & Related papers (2021-12-02T15:18:29Z) - Making Images Real Again: A Comprehensive Survey on Deep Image Composition [34.09380539557308]
The image composition task can be decomposed into multiple sub-tasks, each of which targets one or more issues.
In this paper, we conduct a comprehensive survey of the sub-tasks and blending of image composition.
For each one, we summarize the existing methods, available datasets, and common evaluation metrics.
arXiv Detail & Related papers (2021-06-28T09:09:14Z) - Adversarial Image Composition with Auxiliary Illumination [53.89445873577062]
We propose an Adversarial Image Composition Net (AIC-Net) that achieves realistic image composition.
A novel branched generation mechanism is proposed, which disentangles the generation of shadows and the transfer of foreground styles.
Experiments on pedestrian and car composition tasks show that the proposed AIC-Net achieves superior composition performance.
arXiv Detail & Related papers (2020-09-17T12:58:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.