SIDNet: Learning Shading-aware Illumination Descriptor for Image
Harmonization
- URL: http://arxiv.org/abs/2112.01314v3
- Date: Mon, 18 Sep 2023 09:50:35 GMT
- Authors: Zhongyun Hu, Ntumba Elie Nsampi, Xue Wang and Qing Wang
- Abstract summary: Image harmonization aims at adjusting the appearance of the foreground to make it more compatible with the background.
We decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image and 2) re-rendering of foreground objects under background illumination.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image harmonization aims at adjusting the appearance of the foreground to
make it more compatible with the background. Without exploring background
illumination and its effects on the foreground, existing works cannot generate
realistic foreground shading. In this paper, we
decompose the image harmonization task into two sub-problems: 1) illumination
estimation of the background image and 2) re-rendering of foreground objects
under background illumination. Before solving these two sub-problems, we first
learn a shading-aware illumination descriptor via a well-designed neural
rendering framework, of which the key is a shading bases module that generates
multiple shading bases from the foreground image. Then we design a background
illumination estimation module to extract the illumination descriptor from the
background. Finally, the Shading-aware Illumination Descriptor is used in
conjunction with the neural rendering framework (SIDNet) to produce the
harmonized foreground image with novel, harmonized shading. Moreover, we
construct a photo-realistic synthetic image harmonization dataset that contains
numerous shading variations with image-based lighting. Extensive experiments on
both synthetic and real data demonstrate the superiority of the proposed
method, especially in handling foreground shading.
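To make the pipeline above concrete, here is a minimal sketch (not the authors' released code) of how the described decomposition could be wired: a shading-bases network predicts K candidate shading maps from the foreground, a background encoder predicts a K-dimensional illumination descriptor, and the harmonized shading is their weighted combination. All module architectures, the value of K, and the multiplicative re-shading are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShadingBases(nn.Module):
    """Predicts K shading bases (one map per basis) from the foreground."""
    def __init__(self, k=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k, 3, padding=1), nn.Sigmoid(),  # K maps in [0, 1]
        )

    def forward(self, fg):                  # fg: (B, 3, H, W)
        return self.net(fg)                 # (B, K, H, W)

class IlluminationDescriptor(nn.Module):
    """Pools the background into a K-dim descriptor (basis weights)."""
    def __init__(self, k=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, k),
        )

    def forward(self, bg):                  # bg: (B, 3, H, W)
        return torch.softmax(self.encoder(bg), dim=1)   # (B, K)

def harmonize(fg, bg, bases_net, desc_net):
    bases = bases_net(fg)                               # (B, K, H, W)
    w = desc_net(bg)                                    # (B, K)
    shading = (bases * w[:, :, None, None]).sum(1, keepdim=True)
    return fg * shading                                 # re-shaded foreground
```

In this reading, the illumination descriptor is simply the set of weights that tells the renderer how much of each foreground shading basis the background lighting would produce.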
Related papers
- All-frequency Full-body Human Image Relighting (arXiv 2024-11-01)
Relighting of human images enables post-photography editing of lighting effects in portraits.
The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principles of physical shading.
We propose a two-stage relighting method that can reproduce physically-based shadows and shading from low to high frequencies.
- LightIt: Illumination Modeling and Control for Diffusion Models (arXiv 2024-03-15)
We introduce LightIt, a method for explicit illumination control in image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
- Recasting Regional Lighting for Shadow Removal (arXiv 2024-02-01)
In a shadow region, the degradation degree of object textures depends on the local illumination.
We propose a shadow-aware decomposition network to estimate the illumination and reflectance layers of shadow regions.
We then propose a novel bilateral correction network to recast the lighting of shadow regions in the illumination layer.
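Such decompositions typically follow the intrinsic-image model I = R * L (reflectance times illumination), so a shadow is removed by correcting L inside the shadow mask and recomposing. A hedged sketch of that recomposition step, with the decomposition and correction networks omitted and `gain` standing in for what a correction network would predict:

```python
import numpy as np

def recast_shadow(reflectance, illumination, shadow_mask, gain):
    """reflectance, illumination: (H, W, 3) layers; shadow_mask: (H, W) in {0, 1}.
    `gain` is the brightening factor a correction network would predict."""
    corrected = illumination * (1.0 + shadow_mask[..., None] * (gain - 1.0))
    return np.clip(reflectance * corrected, 0.0, 1.0)   # recomposed image
```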
- Relightful Harmonization: Lighting-aware Portrait Background Replacement (arXiv 2023-12-11)
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from image background with lighting features learned from panorama environment maps.
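A common way to realize such an alignment stage is a feature-matching objective that pulls the lighting embedding of a background crop toward the embedding of its corresponding panorama environment map; the encoders and the cosine-distance choice below are assumptions, not the paper's specification.

```python
import torch.nn.functional as F

def alignment_loss(bg_feat, env_feat):
    """bg_feat, env_feat: (B, D) lighting embeddings from the two encoders."""
    return 1.0 - F.cosine_similarity(bg_feat, env_feat, dim=1).mean()
```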
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes (arXiv 2023-04-06)
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
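The appeal of the hybrid design is that secondary (shadow) rays can be tested against the explicit mesh instead of marching the neural field again. A small sketch of that visibility test using trimesh's ray intersector; the surface points are assumed to come from the primary-ray pass of some neural field:

```python
import numpy as np
import trimesh

def shadowed(points, light_pos, mesh):
    """points: (N, 3) surface hits from primary rays; returns (N,) bools."""
    dirs = light_pos - points
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origins = points + 1e-4 * dirs          # offset to avoid self-intersection
    return mesh.ray.intersects_any(origins, dirs)
```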
- Geometry-aware Single-image Full-body Human Relighting (arXiv 2022-07-11)
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer from both the entanglement between albedo and lighting and the lack of hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
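The albedo/shape/lighting decomposition this line of work builds on reduces, in its textbook Lambertian form, to re-shading the albedo with the new light; the paper's geometry-aware hard shadows are omitted in this minimal sketch:

```python
import numpy as np

def relight(albedo, normals, light_dir):
    """albedo: (H, W, 3); normals: (H, W, 3) unit vectors; light_dir: (3,)."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ l, 0.0, None)   # Lambertian n.l, clamped at 0
    return albedo * shading[..., None]
```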
- Physically-Based Editing of Indoor Scene Lighting from a Single Image (arXiv 2022-05-19)
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis (arXiv 2021-10-29)
We propose a shading-guided generative implicit model that learns a significantly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement (arXiv 2020-03-27)
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
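Operating on disentangled layers means translation is applied to shading and albedo separately, then recomposed as image = albedo * shading; a minimal sketch, where the two `translate_*` networks are placeholders for the pipeline's stages:

```python
import torch

def cg2real(albedo, shading, translate_albedo, translate_shading):
    """albedo, shading: (B, 3, H, W) disentangled layers of the CG image."""
    return translate_albedo(albedo) * translate_shading(shading)
```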
- BachGAN: High-Resolution Image Synthesis from Salient Object Layout (arXiv 2020-03-26)
We propose a new and more practical task for image generation: high-quality image synthesis from salient object layout.
Two main challenges spring from this new task: (i) how to generate fine-grained details and realistic textures without segmentation map input; and (ii) how to create a background and weave it seamlessly into standalone objects.
By generating the hallucinated background representation dynamically, our model can synthesize high-resolution images with both photo-realistic foreground and integral background.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.