Deep Graph Learning for Spatially-Varying Indoor Lighting Prediction
- URL: http://arxiv.org/abs/2202.06300v1
- Date: Sun, 13 Feb 2022 12:49:37 GMT
- Title: Deep Graph Learning for Spatially-Varying Indoor Lighting Prediction
- Authors: Jiayang Bai, Jie Guo, Chenchen Wan, Zhenyu Chen, Zhen He, Shan Yang,
Piaopiao Yu, Yan Zhang and Yanwen Guo
- Abstract summary: We propose a graph learning-based framework for indoor lighting estimation.
At its core is a new lighting model (dubbed DSGLight) based on depth-augmented Spherical Gaussians.
Our method clearly outperforms existing methods both qualitatively and quantitatively.
- Score: 25.726519831985918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lighting prediction from a single image is becoming increasingly important in
many vision and augmented reality (AR) applications in which shading and shadow
consistency between virtual and real objects should be guaranteed. However,
this is a notoriously ill-posed problem, especially for indoor scenarios,
because of the complexity of indoor luminaires and the limited information
involved in 2D images. In this paper, we propose a graph learning-based
framework for indoor lighting estimation. At its core is a new lighting model
(dubbed DSGLight) based on depth-augmented Spherical Gaussians (SG) and a Graph
Convolutional Network (GCN) that infers the new lighting representation from a
single LDR image with a limited field of view. Our lighting model builds 128 evenly
distributed SGs over the indoor panorama, with each SG encoding the lighting
and the depth around its node. The proposed GCN then learns the mapping from
the input image to DSGLight. Compared with existing lighting models, our
DSGLight encodes both direct lighting and indirect environmental lighting more
faithfully and compactly. It also makes network training and inference more
stable. The estimated depth distribution enables temporally stable shading and
shadows under spatially-varying lighting. Through thorough experiments, we show
that our method clearly outperforms existing methods both qualitatively and
quantitatively.
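As a rough sketch of the lighting representation described above, the snippet below places Spherical Gaussian lobes at 128 evenly distributed directions and evaluates the resulting radiance for a query direction. The Fibonacci-lattice node layout, the lobe parameterization, and the fixed sharpness are illustrative assumptions; the paper's exact DSGLight construction (including the per-node depth encoding) may differ.

```python
import numpy as np

def fibonacci_sphere(n=128):
    """Distribute n roughly evenly spaced unit vectors on the sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i           # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n                    # uniform in height
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

def eval_sg_lighting(direction, centers, amplitudes, sharpness):
    """Radiance toward `direction` as a sum of spherical Gaussians.

    Lobe k contributes amplitudes[k] * exp(sharpness[k] * (d . c_k - 1)),
    which peaks when the query direction aligns with the lobe center c_k.
    """
    cos = centers @ direction                        # (n,)
    weights = np.exp(sharpness * (cos - 1.0))        # (n,)
    return weights @ amplitudes                      # (3,) RGB radiance

centers = fibonacci_sphere(128)
amplitudes = np.abs(np.random.default_rng(0).normal(size=(128, 3)))
sharpness = np.full(128, 10.0)
radiance = eval_sg_lighting(np.array([0.0, 0.0, 1.0]),
                            centers, amplitudes, sharpness)
```

In the actual framework these amplitudes would be regressed by the GCN from the input LDR image rather than sampled randomly.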
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
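The approximation idea above can be sketched as follows: since x^γ = exp(γ ln x), a truncated Taylor series for the exponential yields a cheap polynomial evaluation. This is an illustrative sketch of the general technique; the paper's exact formulation and truncation order may differ.

```python
import math

def gamma_taylor(x, gamma, terms=8):
    """Approximate x**gamma via a truncated Taylor expansion of exp(gamma * ln x).

    Uses exp(t) ~ sum_{k<terms} t**k / k! with t = gamma * ln(x); x must be > 0.
    """
    t = gamma * math.log(x)
    out, term = 0.0, 1.0
    for k in range(terms):
        out += term                  # accumulate t**k / k!
        term *= t / (k + 1)          # next series term
    return out

exact = 0.5 ** 2.2
approx = gamma_taylor(0.5, 2.2, terms=20)
```

For pixel values in (0, 1] the argument t stays small, so a modest number of terms already gives high accuracy.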
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- DiFaReli: Diffusion Face Relighting [13.000032155650835]
We present a novel approach to single-view face relighting in the wild.
Handling non-diffuse effects, such as global illumination or cast shadows, has long been a challenge in face relighting.
We achieve state-of-the-art performance on standard benchmark Multi-PIE and can photorealistically relight in-the-wild images.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z)
- Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
arXiv Detail & Related papers (2020-05-16T14:23:05Z)
- PointAR: Efficient Lighting Estimation for Mobile Augmented Reality [7.58114840374767]
We propose an efficient lighting estimation pipeline that is suitable to run on modern mobile devices.
PointAR takes a single RGB-D image captured from the mobile camera and a 2D location in that image, and estimates 2nd order spherical harmonics coefficients.
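Second-order spherical harmonics amount to 9 coefficients per color channel. The sketch below evaluates the standard real SH basis up to degree 2 and reconstructs radiance from a 9x3 coefficient matrix; in PointAR those coefficients would be predicted by the network, so the hand-set values here are purely illustrative.

```python
import numpy as np

def sh_basis_l2(d):
    """Real spherical-harmonics basis up to degree 2 (9 values) for unit dir d."""
    x, y, z = d
    return np.array([
        0.282095,                                    # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l=1
        1.092548 * x * y, 1.092548 * y * z,          # l=2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def eval_sh_lighting(d, coeffs):
    """Reconstruct RGB radiance at direction d from 9x3 SH coefficients."""
    return sh_basis_l2(d) @ coeffs                   # (3,)

coeffs = np.zeros((9, 3))
coeffs[0] = 0.8                                      # ambient (DC) term only
radiance = eval_sh_lighting(np.array([0.0, 0.0, 1.0]), coeffs)
```

With only the DC coefficient set, the reconstructed radiance is the same in every direction, which is what makes SH so compact for smooth ambient lighting.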
arXiv Detail & Related papers (2020-03-30T19:13:26Z)
- Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement [156.18634427704583]
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network.
Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image.
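The curve family used by Zero-DCE is the quadratic LE(x) = x + a*x*(1-x), applied iteratively with a per-pixel alpha map at each step; this sketch applies fixed alpha maps, whereas DCE-Net would predict them per image, so the values below are illustrative only.

```python
import numpy as np

def apply_curves(img, alpha_maps):
    """Iteratively apply the quadratic curve LE(x) = x + a*x*(1-x).

    `img` has values in [0, 1]; each alpha map with values in [-1, 1]
    keeps the result inside [0, 1] and acts as one enhancement iteration,
    so stacking iterations yields a higher-order adjustment curve.
    """
    x = img
    for a in alpha_maps:
        x = x + a * x * (1.0 - x)
    return x

img = np.array([[0.1, 0.3], [0.5, 0.9]])
out = apply_curves(img, [np.full_like(img, 0.8)] * 4)
```

Positive alphas brighten dark pixels more than bright ones, which is why repeated application lifts shadows without clipping highlights.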
arXiv Detail & Related papers (2020-01-19T13:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.