Spatially and color consistent environment lighting estimation using
deep neural networks for mixed reality
- URL: http://arxiv.org/abs/2108.07903v1
- Date: Tue, 17 Aug 2021 23:03:55 GMT
- Title: Spatially and color consistent environment lighting estimation using
deep neural networks for mixed reality
- Authors: Bruno Augusto Dorta Marques, Esteban Walter Gonzalez Clua, Anselmo
Antunes Montenegro, Cristina Nader Vasconcelos
- Abstract summary: This paper presents a CNN-based model to estimate complex lighting for mixed reality environments.
We propose a new CNN architecture that takes an RGB image as input and estimates the environment lighting in real time.
We show in experiments that the CNN architecture can predict the environment lighting with an average mean squared error (MSE) of 7.85e-04 when comparing SH lighting coefficients.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The representation of consistent mixed reality (XR) environments requires
adequate composition of real and virtual illumination in real-time. Estimating the
lighting of a real scene is still a challenge. Due to the ill-posed nature of the
problem, classical inverse-rendering techniques tackle only simple lighting setups.
However, those assumptions do not satisfy the current state of the art in computer
graphics and XR applications. While many recent works address the problem with
machine learning techniques that estimate the environment light and the scene's
materials, most of them depend on scene geometry or other prior knowledge. This
paper presents a CNN-based model that estimates complex lighting for mixed reality
environments with no prior information about the scene. We model the environment
illumination using a set of spherical harmonics (SH) lighting coefficients, which
can efficiently represent area lighting. We propose a new CNN architecture that
takes an RGB image as input and estimates the environment lighting in real time.
Unlike previous CNN-based lighting estimation methods, we use a highly optimized
deep neural network architecture, with a reduced number of parameters, that can
learn highly complex lighting scenarios from real-world high-dynamic-range (HDR)
environment images. Experiments show that the CNN architecture predicts the
environment lighting with an average mean squared error (MSE) of 7.85e-04 when
comparing SH lighting coefficients. We validate our model in a variety of mixed
reality scenarios. Furthermore, we present qualitative results comparing
relightings of real-world scenes.
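The abstract represents environment lighting as SH coefficients and evaluates predictions by MSE over those coefficients. As a concrete illustration, here is a minimal Python sketch of that representation and metric. The abstract does not state the SH order, so degree 2 (9 coefficients per RGB channel, the usual choice for diffuse environment lighting) is an assumption here, and all function names are illustrative.

```python
import numpy as np

def sh_basis_deg2(d):
    """Real spherical-harmonics basis up to degree 2 (9 terms) for a
    unit direction d = (x, y, z), using the standard real-SH constants
    from irradiance environment mapping."""
    x, y, z = d
    return np.array([
        0.282095,                        # Y_0^0
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ])

def eval_sh_lighting(coeffs, d):
    """Reconstruct incident radiance along unit direction d from
    per-channel SH coefficients of shape (9, 3), one column per RGB
    channel -- the quantity a lighting-estimation CNN would regress."""
    return sh_basis_deg2(d) @ coeffs  # -> (3,) RGB radiance

def sh_mse(pred_coeffs, gt_coeffs):
    """Mean squared error between predicted and ground-truth SH
    coefficients -- the kind of metric behind the reported 7.85e-04."""
    return np.mean((pred_coeffs - gt_coeffs) ** 2)
```

A degree-2 expansion is attractive for real-time mixed reality because nine coefficients per channel capture diffuse (irradiance) lighting with only a few percent error for typical environments, so the network output stays tiny.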
Related papers
- CleAR: Robust Context-Guided Generative Lighting Estimation for Mobile Augmented Reality [6.292933471495322]
We propose a generative lighting estimation system called CleAR that can produce high-quality environment maps in the format of 360° images.
Our end-to-end generative estimation runs in as little as 3.2 seconds, up to 110x faster than state-of-the-art methods.
arXiv Detail & Related papers (2024-11-04T15:37:18Z)
- NieR: Normal-Based Lighting Scene Rendering [17.421326290704844]
NieR (Normal-Based Lighting Scene Rendering) is a novel framework that takes into account the nuances of light reflection on diverse material surfaces.
We present the LD (Light Decomposition) module, which captures the lighting reflection characteristics on surfaces.
We also propose the HNGD (Hierarchical Normal Gradient Densification) module to overcome the limitations of sparse Gaussian representation.
arXiv Detail & Related papers (2024-05-21T14:24:43Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
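The Taylor-series trick in the entry above can be made concrete. Below is a minimal sketch, assuming a truncated binomial (Taylor) expansion of x**gamma around x0 = 1 on intensities normalized to [0, 1]; the summary does not specify the expansion point or order, so both are assumptions, and the function name is illustrative.

```python
import numpy as np

def gamma_taylor(x, gamma, order=8):
    """Approximate x**gamma without exp/log, via the binomial (Taylor)
    series of f(x) = x**gamma around x0 = 1:
        x**gamma ~ sum_k C(gamma, k) * (x - 1)**k,
    where C(gamma, k) = gamma*(gamma-1)*...*(gamma-k+1) / k!.
    The series converges for x in (0, 2); image intensities are
    assumed normalized to [0, 1]."""
    result = np.ones_like(x)            # k = 0 term
    coeff = 1.0
    term = np.ones_like(x)
    for k in range(1, order + 1):
        coeff *= (gamma - (k - 1)) / k  # next binomial coefficient
        term = term * (x - 1.0)         # (x - 1)**k
        result = result + coeff * term
    return result

# Example: brightening curve for gamma = 0.5 on normalized intensities.
x = np.linspace(0.2, 1.0, 5)
err = np.abs(gamma_taylor(x, 0.5) - x ** 0.5).max()
print(err)  # error is largest for dark pixels and shrinks as `order` grows
```

The payoff is that the approximation uses only additions and multiplications, which is cheap to evaluate inside a network compared to per-pixel exponentials.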
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method which combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to affect shading, while the representation provides rich, complex reflections that blend seamlessly with the edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
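For context on the "neural precomputed radiance transfer" idea above: classical PRT reduces shading under distant environment lighting to an inner product between a precomputed per-point transfer vector and the light's SH coefficients, and NRTF learns that transfer with a network instead of tabulating it. The sketch below shows only the classical identity, with illustrative names; it is not the paper's implementation.

```python
import numpy as np

def prt_shade(transfer, light_sh):
    """Classical precomputed radiance transfer (PRT): outgoing radiance
    at each surface point is the inner product of a transfer vector
    (folding in visibility, BRDF, and geometry) with the SH coefficients
    of the distant environment light.
    transfer: (N, K) per-point transfer vectors (K SH terms, e.g. K = 9)
    light_sh: (K,)   SH coefficients of the environment lighting
    returns:  (N,)   outgoing radiance per point, one color channel"""
    return transfer @ light_sh

# Relighting under a new environment is just a second inner product:
# radiance_new = prt_shade(transfer, new_light_sh)
```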
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets, enabling lighting estimation jointly in the frequency and spatial domains.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z)
- Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- IllumiNet: Transferring Illumination from Planar Surfaces to Virtual Objects in Augmented Reality [38.83696624634213]
This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
arXiv Detail & Related papers (2020-07-12T13:11:14Z)
- Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
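The supervision idea in this last entry rests on the linearity of light transport: an image under any environment light is a weighted sum of images rendered under a set of basis lights, which is what makes image-based relighting usable as both a data generator and a loss. The paper's actual pipeline is not described in this summary, so the sketch below is a generic illustration with hypothetical names throughout.

```python
import numpy as np

def image_based_relight(basis_images, env_weights):
    """Image-based relighting: combine renders made under individual
    basis lights, weighted by the target environment map's energy in
    each basis light.
    basis_images: (L, H, W, 3) one render per basis light
    env_weights:  (L,)         weight of each basis light
    returns:      (H, W, 3)    relit image"""
    return np.tensordot(env_weights, basis_images, axes=1)

def relight_supervision_loss(pred_weights, gt_image, basis_images):
    """Hypothetical supervision: compare the image relit with the
    predicted lighting against a ground-truth render, so lighting
    errors are penalized where they matter -- in image space."""
    pred_image = image_based_relight(basis_images, pred_weights)
    return np.mean((pred_image - gt_image) ** 2)
```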
arXiv Detail & Related papers (2020-05-16T14:23:05Z)