LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- URL: http://arxiv.org/abs/2301.06184v1
- Date: Sun, 15 Jan 2023 20:47:38 GMT
- Title: LitAR: Visually Coherent Lighting for Mobile Augmented Reality
- Authors: Yiqin Zhao, Chongyang Ma, Haibin Huang, Tian Guo
- Abstract summary: We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
- Score: 24.466149552743516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An accurate understanding of omnidirectional environment lighting is crucial
for high-quality virtual object rendering in mobile augmented reality (AR). In
particular, to support reflective rendering, existing methods have leveraged
deep learning models to estimate or have used physical light probes to capture
physical lighting, typically represented in the form of an environment map.
However, these methods often fail to provide visually coherent details or
require additional setups. For example, the commercial framework ARKit uses a
convolutional neural network that can generate realistic environment maps;
however, the corresponding reflective rendering might not match the physical
environments. In this work, we present the design and implementation of a
lighting reconstruction framework called LitAR that enables realistic and
visually-coherent rendering. LitAR addresses several challenges of supporting
lighting information for mobile AR. First, to address the spatial variance
problem, LitAR uses two-field lighting reconstruction to divide the lighting
reconstruction task into the spatial variance-aware near-field reconstruction
and the directional-aware far-field reconstruction. The corresponding
environment map allows reflective rendering with correct color tones. Second,
LitAR uses two noise-tolerant data capturing policies to ensure data quality,
namely guided bootstrapped movement and motion-based automatic capturing.
Third, to handle the mismatch between the mobile computation capability and the
high computation requirement of lighting reconstruction, LitAR employs two
novel real-time environment map rendering techniques called multi-resolution
projection and anchor extrapolation. These two techniques effectively remove
the need for time-consuming mesh reconstruction while maintaining visual
quality.
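As a rough illustration of the two-field idea described above, the sketch below projects near-field colored 3D points onto an equirectangular environment map and falls back to a flat far-field color wherever no observation lands. All names, the map resolution, and the flat far-field fallback are illustrative assumptions, not LitAR's implementation.

```python
import numpy as np

def project_to_equirect(points, colors, height=64, width=128,
                        ambient=(0.5, 0.5, 0.5)):
    """Splat colored 3D points (near field) onto an equirectangular
    environment map; pixels with no observation keep a flat far-field
    (ambient) color. Illustrative sketch only, not LitAR's code."""
    env = np.tile(np.asarray(ambient, dtype=np.float32), (height, width, 1))
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    theta = np.arccos(np.clip(dirs[:, 1], -1.0, 1.0))   # polar angle from +y
    phi = np.arctan2(dirs[:, 2], dirs[:, 0]) + np.pi    # azimuth in [0, 2*pi)
    rows = np.clip((theta / np.pi * height).astype(int), 0, height - 1)
    cols = np.clip((phi / (2 * np.pi) * width).astype(int), 0, width - 1)
    env[rows, cols] = colors
    return env
```

A real system would also weight splats by distance and blend overlapping observations; this sketch keeps only the geometric projection step.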
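The multi-resolution idea (filling unobserved environment-map pixels from coarser averages instead of reconstructing a mesh) can be sketched as below. This is a simplified, assumption-laden illustration of hierarchical hole filling, not LitAR's actual multi-resolution projection algorithm.

```python
import numpy as np

def multires_fill(env, observed, levels=3):
    """Fill unobserved pixels of an environment map from progressively
    coarser block averages of the observed pixels -- a rough sketch of
    hole filling without mesh reconstruction, not LitAR's code.
    env: (H, W, 3) float map; observed: (H, W) bool mask.
    H and W must be divisible by 2**levels."""
    h, w, _ = env.shape
    out = env.copy()
    filled = observed.copy()
    for level in range(1, levels + 1):
        f = 2 ** level
        # Average the originally observed colors within each f x f block.
        blocks = env.reshape(h // f, f, w // f, f, 3)
        mask = observed.reshape(h // f, f, w // f, f)
        counts = mask.sum(axis=(1, 3))
        sums = (blocks * mask[..., None]).sum(axis=(1, 3))
        coarse = sums / np.maximum(counts, 1)[..., None]
        # Upsample the coarse level and fill still-unfilled pixels.
        up = coarse.repeat(f, axis=0).repeat(f, axis=1)
        has_data = (counts > 0).repeat(f, axis=0).repeat(f, axis=1)
        fill = ~filled & has_data
        out[fill] = up[fill]
        filled |= fill
    return out
```

Each level covers larger holes at the cost of blurrier fills, which matches the intuition that far-field reflections tolerate lower spatial resolution than near-field detail.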
Related papers
- REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices [51.983541908241726]
This work tackles the challenging task of achieving real-time novel view synthesis on various scenes.
Existing real-time rendering methods, especially those based on meshes, often have subpar performance in modeling surfaces with rich view-dependent appearances.
arXiv Detail & Related papers (2024-03-25T07:07:50Z)
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- Multi-Camera Lighting Estimation for Photorealistic Front-Facing Mobile Augmented Reality [6.41726492515401]
Lighting understanding plays an important role in virtual object composition, including mobile augmented reality (AR) applications.
We propose to leverage dual-camera streaming to generate a high-quality environment map by combining multi-view lighting reconstruction and parametric directional lighting estimation.
arXiv Detail & Related papers (2023-01-15T16:52:59Z)
- Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection [84.52197307286681]
We propose a novel multitask auto encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervision manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We achieve state-of-the-art performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis [28.356700318603565]
We explicitly model the light transport between scene surfaces, relying on traditional integration schemes and the rendering equation to reconstruct a scene.
By learning decomposed transport with surface representations established in conventional rendering methods, the method naturally facilitates editing shape, reflectance, lighting and scene composition.
We validate the proposed approach for scene editing, relighting and reflectance estimation learned from synthetic and captured views on a subset of NeRV's datasets.
arXiv Detail & Related papers (2021-04-28T03:47:48Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- PointAR: Efficient Lighting Estimation for Mobile Augmented Reality [7.58114840374767]
We propose an efficient lighting estimation pipeline that is suitable to run on modern mobile devices.
PointAR takes a single RGB-D image captured from the mobile camera and a 2D location in that image, and estimates second-order spherical harmonics coefficients.
arXiv Detail & Related papers (2020-03-30T19:13:26Z)
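Second-order spherical harmonics lighting, as estimated by PointAR, consists of nine coefficients per color channel. The sketch below shows the standard real SH basis (bands 0-2) and how such coefficients would be evaluated into per-normal irradiance; the function names are illustrative, not PointAR's API.

```python
import numpy as np

def sh_basis_order2(n):
    """Real spherical harmonics basis (bands 0-2, nine terms),
    with the standard normalization constants, evaluated at a
    unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                        # band 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # band 1
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),      # band 2
    ])

def shade(coeffs, normal):
    """coeffs: (9, 3) RGB SH coefficients; returns RGB radiance
    for the given unit surface normal."""
    return sh_basis_order2(normal) @ coeffs
```

Nine coefficients per channel make this representation compact enough to estimate and transmit in real time on a mobile device, which is the trade-off PointAR targets.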
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.