LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- URL: http://arxiv.org/abs/2301.06184v1
- Date: Sun, 15 Jan 2023 20:47:38 GMT
- Title: LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- Authors: Yiqin Zhao, Chongyang Ma, Haibin Huang, Tian Guo
- Abstract summary: We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
- Score: 24.466149552743516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An accurate understanding of omnidirectional environment lighting is crucial
for high-quality virtual object rendering in mobile augmented reality (AR). In
particular, to support reflective rendering, existing methods have either
leveraged deep learning models to estimate lighting or used physical light
probes to capture it; in both cases, the lighting is typically represented
as an environment map.
However, these methods often fail to provide visually coherent details or
require additional setups. For example, the commercial framework ARKit uses a
convolutional neural network that can generate realistic environment maps;
however, the corresponding reflective rendering might not match the physical
environment. In this work, we present the design and implementation of a
lighting reconstruction framework called LitAR that enables realistic and
visually-coherent rendering. LitAR addresses several challenges of supporting
lighting information for mobile AR. First, to address the spatial variance
problem, LitAR uses two-field lighting reconstruction to divide the lighting
reconstruction task into the spatial variance-aware near-field reconstruction
and the direction-aware far-field reconstruction. The corresponding
environment map allows reflective rendering with correct color tones. Second,
LitAR uses two noise-tolerant data capturing policies to ensure data quality,
namely guided bootstrapped movement and motion-based automatic capturing.
Third, to handle the mismatch between the mobile computation capability and the
high computation requirement of lighting reconstruction, LitAR employs two
novel real-time environment map rendering techniques called multi-resolution
projection and anchor extrapolation. These two techniques effectively remove
the need for time-consuming mesh reconstruction while maintaining visual
quality.
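The two-field design described above lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch, not LitAR's published code: it composites a near-field observation (a colored point cloud around the virtual object) with a directional far-field estimate into one equirectangular environment map. The function name, array layouts, and default resolution are all assumptions.

```python
# Illustrative sketch only; LitAR's actual implementation is not given in
# this abstract. It shows the *idea* of two-field reconstruction: near-field
# scene points override a directional far-field estimate wherever they
# project into the environment map.

import numpy as np

def compose_environment_map(points, colors, far_field, height=128, width=256):
    """Splat near-field 3D points (positions relative to the virtual
    object) into an equirectangular map; keep the far-field estimate
    wherever no near-field observation lands. Returns the composited
    map and a boolean mask of near-field coverage."""
    env = far_field.copy()                    # start from far-field colors
    covered = np.zeros((height, width), dtype=bool)

    # Unit viewing direction of each point as seen from the object.
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)

    # Equirectangular mapping: azimuth -> column, elevation -> row.
    theta = np.arctan2(dirs[:, 0], dirs[:, 2])        # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(dirs[:, 1], -1.0, 1.0))   # elevation
    cols = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((np.pi / 2 - phi) / np.pi * (height - 1)).astype(int)

    env[rows, cols] = colors                  # near-field wins where observed
    covered[rows, cols] = True
    return env, covered
```

A renderer would then sample this map for reflective shading; grounding the map in observed near-field colors is what the abstract credits with producing correct color tones.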
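The motion-based automatic capturing policy can be sketched in the same hedged spirit: capture a new frame only after the camera has translated or rotated enough since the last capture, which filters out redundant frames. The thresholds, the Pose type, and the class name below are invented for illustration and are not from the paper.

```python
# Hedged sketch of a motion-triggered capture policy in the spirit of
# LitAR's "motion-based automatic capturing"; all specifics are assumed.

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Pose:
    position: np.ndarray   # (3,) camera position in world coordinates
    rotation: np.ndarray   # (3, 3) camera rotation matrix

def rotation_angle(r_a: np.ndarray, r_b: np.ndarray) -> float:
    """Angle in radians of the relative rotation between two orientations."""
    cos_angle = (np.trace(r_a.T @ r_b) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

class MotionTriggeredCapture:
    def __init__(self, min_translation=0.05, min_rotation=np.deg2rad(5.0)):
        self.min_translation = min_translation  # meters (assumed threshold)
        self.min_rotation = min_rotation        # radians (assumed threshold)
        self.last_pose: Optional[Pose] = None

    def should_capture(self, pose: Pose) -> bool:
        """Return True when the camera has moved enough to justify
        capturing a new frame for lighting reconstruction."""
        if self.last_pose is None:
            self.last_pose = pose
            return True                          # always take the first frame
        moved = np.linalg.norm(pose.position - self.last_pose.position)
        turned = rotation_angle(self.last_pose.rotation, pose.rotation)
        if moved >= self.min_translation or turned >= self.min_rotation:
            self.last_pose = pose
            return True
        return False
```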
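Finally, a speculative reading of multi-resolution projection: splatting a sparse point cloud at full resolution leaves holes, so the same points can also be projected at coarser resolutions whose upsampled pixels fill whatever the finer levels missed, sidestepping mesh reconstruction. The sketch below reuses compose_environment_map from the first sketch; LitAR's actual technique may differ in detail.

```python
# Speculative sketch of hole filling via coarse-to-fine point projection;
# names, level choices, and the upsampling scheme are all assumptions.

import numpy as np

def multi_resolution_projection(points, colors, far_field, levels=(1, 2, 4)):
    """Composite projections at several resolutions, finest level first,
    so coarser levels only fill pixels that are still empty."""
    height, width = far_field.shape[:2]
    env = far_field.copy()
    filled = np.zeros((height, width), dtype=bool)

    for scale in levels:                       # 1 = finest resolution
        h, w = height // scale, width // scale
        level_map, covered = compose_environment_map(
            points, colors, np.zeros((h, w, 3)), height=h, width=w)
        # Nearest-neighbor upsample back to the full-resolution grid.
        up_map = np.repeat(np.repeat(level_map, scale, axis=0), scale, axis=1)
        up_cov = np.repeat(np.repeat(covered, scale, axis=0), scale, axis=1)
        take = up_cov & ~filled                # only touch still-empty pixels
        env[take] = up_map[take]
        filled |= take
    return env
```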
Related papers
- CleAR: Robust Context-Guided Generative Lighting Estimation for Mobile Augmented Reality [6.292933471495322]
We propose a generative lighting estimation system called CleAR that can produce high-quality environment maps in the format of 360° images.
Our end-to-end generative estimation takes as fast as 3.2 seconds, outperforming state-of-the-art methods by 110x.
arXiv Detail & Related papers (2024-11-04T15:37:18Z) - RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z) - PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z) - Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection [84.52197307286681]
We propose a novel multitask auto encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervision manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We have achieved the state-of-the-art performance using synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.