Designing An Illumination-Aware Network for Deep Image Relighting
- URL: http://arxiv.org/abs/2207.10582v1
- Date: Thu, 21 Jul 2022 16:21:24 GMT
- Title: Designing An Illumination-Aware Network for Deep Image Relighting
- Authors: Zuo-Liang Zhu, Zhen Li, Rui-Xun Zhang, Chun-Le Guo, Ming-Ming Cheng
- Abstract summary: We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
- Score: 69.750906769976
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Lighting is a determining factor in photography that affects the style,
expression of emotion, and even quality of images. Creating or finding
satisfying lighting conditions in reality is laborious and time-consuming, so
it is of great value to develop a technology to manipulate illumination in an
image as post-processing. Although previous works have explored techniques
based on the physical viewpoint for relighting images, extensive supervision
and prior knowledge are necessary to generate reasonable images, restricting
the generalization ability of these works. In contrast, we take the viewpoint
of image-to-image translation and implicitly merge ideas of the conventional
physical viewpoint. In this paper, we present an Illumination-Aware Network
(IAN) which follows the guidance from hierarchical sampling to progressively
relight a scene from a single image with high efficiency. In addition, an
Illumination-Aware Residual Block (IARB) is designed to approximate the
physical rendering process and to extract precise descriptors of light sources
for further manipulations. We also introduce a depth-guided geometry encoder
for acquiring valuable geometry- and structure-related representations once the
depth information is available. Experimental results show that our proposed
method produces better quantitative and qualitative relighting results than
previous state-of-the-art methods. The code and models are publicly available
on https://github.com/NK-CS-ZZL/IAN.
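The abstract does not specify how the Illumination-Aware Residual Block conditions features on a light-source descriptor. As an illustration only, here is a minimal numpy sketch of one common way such conditioning is done (FiLM-style scale-and-shift modulation inside a residual block). All names, shapes, and the modulation scheme are assumptions for exposition, not the paper's actual architecture; the real implementation is in the linked repository.

```python
import numpy as np

def relu(x):
    """Elementwise rectifier."""
    return np.maximum(x, 0.0)

def illumination_modulated_residual(x, light_descriptor, w1, w2, gamma_w, beta_w):
    """Hypothetical residual block: a light descriptor produces per-channel
    scale (gamma) and shift (beta) terms that modulate intermediate features
    before the residual addition."""
    gamma = light_descriptor @ gamma_w   # per-channel scale from the descriptor
    beta = light_descriptor @ beta_w     # per-channel shift from the descriptor
    h = relu(x @ w1)                     # first projection + nonlinearity
    h = gamma * h + beta                 # illumination-conditioned modulation
    return x + h @ w2                    # residual connection

# Toy shapes: 8 feature channels, 4-dimensional light descriptor.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
d = rng.standard_normal((1, 4))
y = illumination_modulated_residual(
    x, d,
    w1=rng.standard_normal((8, 8)) * 0.1,
    w2=rng.standard_normal((8, 8)) * 0.1,
    gamma_w=rng.standard_normal((4, 8)) * 0.1,
    beta_w=rng.standard_normal((4, 8)) * 0.1,
)
print(y.shape)  # (1, 8)
```

Because the modulation parameters are computed from the descriptor rather than learned per block, swapping in a descriptor extracted from a different image would re-style the features, which matches the abstract's claim that the descriptors support "further manipulations".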
Related papers
- Materialist: Physically Based Editing Using Single-Image Inverse Rendering [50.39048790589746]
We present a method combining a learning-based approach with progressive differentiable rendering.
Our method achieves more realistic light material interactions, accurate shadows, and global illumination.
We also propose a method for material transparency editing that operates effectively without requiring full scene geometry.
arXiv Detail & Related papers (2025-01-07T11:52:01Z)
- Zero-Shot Low Light Image Enhancement with Diffusion Prior [2.102429358229889]
We introduce a novel zero-shot method for controlling and refining the generative behavior of diffusion models for dark-to-light image conversion tasks.
Our method demonstrates superior performance over existing state-of-the-art methods in the task of low-light image enhancement.
arXiv Detail & Related papers (2024-12-18T00:31:18Z)
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics.
Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs.
We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Deep Relighting Networks for Image Light Source Manipulation [37.15283682572421]
We propose a novel Deep Relighting Network (DRN) with three parts: 1) scene reconversion, 2) shadow prior estimation, and 3) re-renderer.
Experimental results show that the proposed method outperforms competing methods, both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-08-19T07:03:23Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Burst Denoising of Dark Images [19.85860245798819]
We propose a deep learning framework for obtaining clean and colorful RGB images from extremely dark raw images.
The backbone of our framework is a novel coarse-to-fine network architecture that generates high-quality outputs in a progressive manner.
Our experiments demonstrate that the proposed approach leads to perceptually more pleasing results than state-of-the-art methods.
arXiv Detail & Related papers (2020-03-17T17:17:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.