Deep Relighting Networks for Image Light Source Manipulation
- URL: http://arxiv.org/abs/2008.08298v2
- Date: Thu, 15 Oct 2020 04:02:17 GMT
- Title: Deep Relighting Networks for Image Light Source Manipulation
- Authors: Li-Wen Wang, Wan-Chi Siu, Zhi-Song Liu, Chu-Tak Li, Daniel P.K. Lun
- Abstract summary: We propose a novel Deep Relighting Network (DRN) with three parts: 1) scene reconversion, 2) shadow prior estimation, and 3) re-renderer.
Experimental results show that the proposed method outperforms competing methods, both qualitatively and quantitatively.
- Score: 37.15283682572421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulating the light source of given images is an interesting task and
useful in various applications, including photography and cinematography.
Existing methods usually require additional information like the geometric
structure of the scene, which may not be available for most images. In this
paper, we formulate the single image relighting task and propose a novel Deep
Relighting Network (DRN) with three parts: 1) scene reconversion, which aims to
reveal the primary scene structure through a deep auto-encoder network, 2)
shadow prior estimation, to predict the lighting effect of the new light
direction through adversarial learning, and 3) re-renderer, to combine the
primary structure with the reconstructed shadow view to form the estimated
image under the target light source. Experimental results show that the
proposed method outperforms competing methods, both qualitatively and
quantitatively. Specifically, the proposed DRN achieved the best PSNR in the
"AIM2020 - Any to one relighting challenge" at the 2020 ECCV conference.
Related papers
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
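As context for the decomposition mentioned above, the classic Blinn-Phong model splits reflected light into ambient, diffuse, and specular terms using the halfway vector. The sketch below is that textbook formula only; it is not GS-Phong's 3D Gaussian formulation.
```python
# Textbook Blinn-Phong shading: ambient + diffuse + specular decomposition.
# Classical model only; GS-Phong's Gaussian-based version is not reproduced here.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(normal, light_dir, view_dir, k_a, k_d, k_s, shininess, light_color):
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(l + v)  # halfway vector between light and view directions
    ambient = k_a * light_color
    diffuse = k_d * max(np.dot(n, l), 0.0) * light_color
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess * light_color
    return ambient + diffuse + specular  # per-channel reflected radiance

# Example: surface facing up, point light and camera above at a slight angle.
color = blinn_phong(
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.3, 0.2, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
    k_a=0.1, k_d=0.7, k_s=0.4, shininess=32,
    light_color=np.array([1.0, 1.0, 1.0]),
)
print(color)
```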
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- Reconstructing Continuous Light Field From Single Coded Image [7.937367109582907]
We propose a method for reconstructing a continuous light field of a target scene from a single observed image.
Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image.
NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints.
arXiv Detail & Related papers (2023-11-16T07:59:01Z)
- Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects [46.04357263321969]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images.
It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- NVS-MonoDepth: Improving Monocular Depth Prediction with Novel View Synthesis [74.4983052902396]
We propose a novel training method split into three main steps to improve monocular depth estimation.
Experimental results show that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets.
arXiv Detail & Related papers (2021-12-22T12:21:08Z)
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR), and Feature Fusing (FF) modules.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.