Recasting Regional Lighting for Shadow Removal
- URL: http://arxiv.org/abs/2402.00341v1
- Date: Thu, 1 Feb 2024 05:08:39 GMT
- Title: Recasting Regional Lighting for Shadow Removal
- Authors: Yuhao Liu, Zhanghan Ke, Ke Xu, Fang Liu, Zhenwei Wang, Rynson W.H. Lau
- Abstract summary: In a shadow region, the degradation degree of object textures depends on the local illumination.
We propose a shadow-aware decomposition network to estimate the illumination and reflectance layers of shadow regions.
We then propose a novel bilateral correction network to recast the lighting of shadow regions in the illumination layer.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Removing shadows requires an understanding of both lighting conditions and
object textures in a scene. Existing methods typically learn pixel-level color
mappings between shadow and non-shadow images, in which the joint modeling of
lighting and object textures is implicit and inadequate. We observe that in a
shadow region, the degradation degree of object textures depends on the local
illumination, while simply enhancing the local illumination cannot fully
recover the attenuated textures. Based on this observation, we propose to
condition the restoration of attenuated textures on the corrected local
lighting in the shadow region. Specifically, we first design a shadow-aware
decomposition network to estimate the illumination and reflectance layers of
shadow regions explicitly. We then propose a novel bilateral correction network
to recast the lighting of shadow regions in the illumination layer via a novel
local lighting correction module, and to restore the textures conditioned on
the corrected illumination layer via a novel illumination-guided texture
restoration module. We further annotate pixel-wise shadow masks for the public
SRD dataset, which originally contains only image pairs. Experiments on three
benchmarks show that our method outperforms existing state-of-the-art shadow
removal methods.
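The decompose → recast → restore pipeline in the abstract can be sketched in a few lines. This is a hypothetical illustration only: the paper uses learned networks for every stage, while the stubs below (`decompose`, `recast_lighting`, `remove_shadow` and the luminance-proxy decomposition) are placeholder names and heuristics standing in for them, under the intrinsic-image assumption I = R * L.

```python
import numpy as np

def decompose(image):
    """Split an image into reflectance and illumination layers (I = R * L).
    The paper's shadow-aware network predicts both; this stub uses a crude
    per-pixel luminance as the illumination proxy, for illustration only."""
    illumination = image.mean(axis=-1, keepdims=True)        # (H, W, 1)
    reflectance = image / np.clip(illumination, 1e-6, None)  # R = I / L
    return reflectance, illumination

def recast_lighting(illumination, shadow_mask, gain=2.0):
    """Brighten the illumination layer inside the shadow region only,
    standing in for the paper's local lighting correction module."""
    corrected = illumination.copy()
    corrected[shadow_mask] *= gain
    return corrected

def remove_shadow(image, shadow_mask):
    reflectance, illumination = decompose(image)
    corrected = recast_lighting(illumination, shadow_mask)
    # The paper restores textures with a learned module conditioned on the
    # corrected illumination; simply recombining layers approximates that.
    return np.clip(reflectance * corrected, 0.0, 1.0)

img = np.random.rand(8, 8, 3).astype(np.float32)
mask = np.zeros((8, 8, 1), dtype=bool)
mask[2:6, 2:6] = True                 # a square "shadow" region
out = remove_shadow(img, mask)
```

The point of the sketch is the conditioning order: lighting is corrected first, and texture recovery operates on the corrected illumination rather than on raw pixel values.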
Related papers
- OmniSR: Shadow Removal under Direct and Indirect Lighting [16.90413085184936]
A significant challenge in removing shadows from indirect illumination is obtaining shadow-free images to train the shadow removal network.
We propose a novel rendering pipeline for generating shadowed and shadow-free images under direct and indirect illumination.
We also propose an innovative shadow removal network that explicitly integrates semantic and geometric priors through concatenation and attention mechanisms.
arXiv Detail & Related papers (2024-10-02T16:30:10Z)
- COMPOSE: Comprehensive Portrait Shadow Editing [25.727386174616868]
COMPOSE is a novel shadow editing pipeline for human portraits.
It offers precise control over shadow attributes such as shape, intensity, and position.
We have trained models to: (1) predict this light source representation from images, and (2) generate realistic shadows using this representation.
arXiv Detail & Related papers (2024-08-25T19:18:18Z)
- SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes [51.50157919750782]
We present SIRe-IR, an implicit neural rendering inverse approach that decomposes the scene into environment map, albedo, and roughness.
By accurately modeling the indirect radiance field, normal, visibility, and direct light simultaneously, we are able to remove both shadows and indirect illumination.
Even in the presence of intense illumination, our method recovers high-quality albedo and roughness with no shadow interference.
arXiv Detail & Related papers (2023-10-19T10:44:23Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Structure-Informed Shadow Removal Networks [67.57092870994029]
Existing deep learning-based shadow removal methods still produce images with shadow remnants.
We propose a novel structure-informed shadow removal network (StructNet) to leverage the image-structure information to address the shadow remnant problem.
Our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to improve them further.
arXiv Detail & Related papers (2023-01-09T06:31:52Z)
- Estimating Reflectance Layer from A Single Image: Integrating Reflectance Guidance and Shadow/Specular Aware Learning [66.36104525390316]
We propose a two-stage learning method, including reflectance guidance and a Shadow/Specular-Aware (S-Aware) network to tackle the problem.
In the first stage, an initial reflectance layer free from shadows and specularities is obtained with the constraint of novel losses.
To further enforce the reflectance layer to be independent of shadows and specularities in the second-stage refinement, we introduce an S-Aware network that distinguishes the reflectance image from the input image.
arXiv Detail & Related papers (2022-11-27T07:26:41Z)
- Geometry-aware Single-image Full-body Human Relighting [37.381122678376805]
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer both from the entanglement of albedo with lighting and from the inability to produce hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
arXiv Detail & Related papers (2022-07-11T10:21:02Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- SIDNet: Learning Shading-aware Illumination Descriptor for Image Harmonization [10.655037947250516]
Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background.
We decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under background illumination.
arXiv Detail & Related papers (2021-12-02T15:18:29Z)
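The two-step split that SIDNet's summary describes (estimate background illumination, then re-render the foreground under it) can be illustrated with a toy version. Everything here is a hypothetical stand-in: SIDNet learns both steps with networks, whereas this sketch uses a mean background color as the "illumination descriptor" and a per-channel gain as the re-rendering.

```python
import numpy as np

def estimate_background_illumination(image, fg_mask):
    """Step 1 (toy): describe the background illumination as the mean
    color of all background pixels. SIDNet learns a richer descriptor."""
    return image[~fg_mask].mean(axis=0)  # shape (3,)

def rerender_foreground(image, fg_mask, illum):
    """Step 2 (toy): pull the foreground's color statistics toward the
    background illumination with a per-channel gain."""
    fg_mean = image[fg_mask].mean(axis=0)
    gain = illum / np.clip(fg_mean, 1e-6, None)
    out = image.copy()
    out[fg_mask] = np.clip(out[fg_mask] * gain, 0.0, 1.0)
    return out

composite = np.random.rand(16, 16, 3).astype(np.float32)
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True               # pasted foreground region
illum = estimate_background_illumination(composite, mask)
harmonized = rerender_foreground(composite, mask, illum)
```

The design point the split captures: only the foreground is re-rendered, so the background (and its illumination estimate) is left untouched.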
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.