LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- URL: http://arxiv.org/abs/2301.06184v1
- Date: Sun, 15 Jan 2023 20:47:38 GMT
- Title: LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- Authors: Yiqin Zhao, Chongyang Ma, Haibin Huang, Tian Guo
- Abstract summary: We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
- Score: 24.466149552743516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An accurate understanding of omnidirectional environment lighting is crucial
for high-quality virtual object rendering in mobile augmented reality (AR). In
particular, to support reflective rendering, existing methods have leveraged
deep learning models to estimate lighting or have used physical light probes to
capture it, typically represented in the form of an environment map.
However, these methods often fail to provide visually coherent details or
require additional setups. For example, the commercial framework ARKit uses a
convolutional neural network that can generate realistic environment maps;
however, the corresponding reflective rendering might not match the physical
environments. In this work, we present the design and implementation of a
lighting reconstruction framework called LitAR that enables realistic and
visually-coherent rendering. LitAR addresses several challenges of supporting
lighting information for mobile AR. First, to address the spatial variance
problem, LitAR uses two-field lighting reconstruction to divide the lighting
reconstruction task into the spatial variance-aware near-field reconstruction
and the directional-aware far-field reconstruction. The corresponding
environment map allows reflective rendering with correct color tones. Second,
LitAR uses two noise-tolerant data capturing policies to ensure data quality,
namely guided bootstrapped movement and motion-based automatic capturing.
Third, to handle the mismatch between the mobile computation capability and the
high computation requirement of lighting reconstruction, LitAR employs two
novel real-time environment map rendering techniques called multi-resolution
projection and anchor extrapolation. These two techniques effectively remove
the need for time-consuming mesh reconstruction while maintaining visual
quality.
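The abstract describes the two-field idea only at a high level. Below is a minimal Python sketch of what composing such an environment map could look like, assuming an equirectangular parameterization, a point cloud of nearby color observations (near field), and a coarse directional prior (far field). All function names, the map layout, and the data shapes are illustrative assumptions, not LitAR's actual implementation.

```python
import numpy as np

def direction_to_equirect(dirs, height, width):
    """Map unit direction vectors to (row, col) indices of an
    equirectangular environment map (longitude/latitude layout)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    theta = np.arccos(np.clip(y, -1.0, 1.0))        # polar angle in [0, pi]
    phi = np.arctan2(x, z)                          # azimuth in [-pi, pi]
    rows = np.clip((theta / np.pi) * height, 0, height - 1).astype(int)
    cols = np.clip(((phi + np.pi) / (2 * np.pi)) * width, 0, width - 1).astype(int)
    return rows, cols

def two_field_environment_map(near_points, near_colors, far_field, anchor,
                              height=128, width=256):
    """Compose an environment map at the virtual object's position `anchor`.

    near_points : (N, 3) world-space points observed near the object
    near_colors : (N, 3) RGB colors of those points
    far_field   : (height, width, 3) coarse directional (far-field) estimate
    """
    env = far_field.copy()                          # start from the far-field prior
    dirs = near_points - anchor                     # re-project points around the anchor
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-8
    rows, cols = direction_to_equirect(dirs, height, width)
    env[rows, cols] = near_colors                   # near-field observations overwrite the prior
    return env

# Usage with random stand-in data; real inputs would come from AR camera
# frames and depth / point-cloud observations.
pts = np.random.randn(5000, 3)
rgb = np.random.rand(5000, 3)
far = np.full((128, 256, 3), 0.3)                   # flat gray far-field placeholder
env_map = two_field_environment_map(pts, rgb, far, anchor=np.zeros(3))
print(env_map.shape)  # (128, 256, 3)
```

The sketch only illustrates the composition step; it does not model LitAR's capturing policies, multi-resolution projection, or anchor extrapolation.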
Related papers
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics.
Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs.
We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z) - ReCap: Better Gaussian Relighting with Cross-Environment Captures [51.2614945509044]
In this work, we present ReCap, which treats cross-environment captures as a multi-task target to provide the missing supervision that cuts through the entanglement.
Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes.
This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances.
Together with a streamlined shading function and effective post-processing, ReCap outperforms the leading competitor by 3.4 dB in PSNR on an expanded relighting benchmark.
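As a rough illustration of what jointly optimizing several lighting representations around shared material attributes can mean, here is a toy numpy sketch under a deliberately simplified Lambertian model (appearance = albedo x light). This is not ReCap's actual formulation; every name and the loss are illustrative.

```python
import numpy as np

# Toy model: appearance_k = albedo * light_k for each capture environment k.
# The albedo is shared across environments, the light intensities are not.
rng = np.random.default_rng(0)
true_albedo = rng.uniform(0.2, 0.9, size=1000)           # per-pixel material
true_lights = np.array([0.5, 1.0, 2.0])                   # one intensity per environment
observations = true_albedo[None, :] * true_lights[:, None]

albedo = np.full(1000, 0.5)                                # shared material estimate
lights = np.ones(3)                                        # per-environment light estimates
lr = 0.1
for _ in range(500):
    pred = albedo[None, :] * lights[:, None]
    err = pred - observations
    # Joint gradient step: the shared albedo receives signal from every
    # environment, which is what couples the lighting estimates together.
    grad_albedo = (err * lights[:, None]).mean(axis=0)
    grad_lights = (err * albedo[None, :]).mean(axis=1)
    albedo -= lr * grad_albedo
    lights -= lr * grad_lights

# Material/lighting are only identifiable up to a global scale, so report
# the reconstruction error and the relative light intensities.
print(np.abs(albedo[None, :] * lights[:, None] - observations).mean())
print(np.round(lights / lights[0], 2))                     # ~ [1, 2, 4]
```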
arXiv Detail & Related papers (2024-12-10T14:15:32Z) - CleAR: Robust Context-Guided Generative Lighting Estimation for Mobile Augmented Reality [6.292933471495322]
We propose a generative lighting estimation system called CleAR that can produce high-quality environment maps in the format of 360° images.
Our end-to-end generative estimation takes as little as 3.2 seconds, outperforming state-of-the-art methods by 110x.
arXiv Detail & Related papers (2024-11-04T15:37:18Z) - RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z) - PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - Multitask AET with Orthogonal Tangent Regularity for Dark Object
Detection [84.52197307286681]
We propose a novel multitask auto encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervision manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We achieve state-of-the-art performance using synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIBR++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Object-based Illumination Estimation with Rendering-aware Neural
Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)