Multi-Camera Lighting Estimation for Photorealistic Front-Facing Mobile Augmented Reality
- URL: http://arxiv.org/abs/2301.06143v1
- Date: Sun, 15 Jan 2023 16:52:59 GMT
- Title: Multi-Camera Lighting Estimation for Photorealistic Front-Facing Mobile Augmented Reality
- Authors: Yiqin Zhao, Sean Fanello, Tian Guo
- Abstract summary: Lighting understanding plays an important role in virtual object composition, including mobile augmented reality (AR) applications.
We propose to leverage dual-camera streaming to generate a high-quality environment map by combining multi-view lighting reconstruction and parametric directional lighting estimation.
- Score: 6.41726492515401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lighting understanding plays an important role in virtual object composition,
including mobile augmented reality (AR) applications. Prior work often targets
recovering lighting from the physical environment to support photorealistic AR
rendering. Because the common workflow is to use a back-facing camera to
capture the physical world for overlaying virtual objects, we refer to this
usage pattern as back-facing AR. However, existing methods often fall short in
supporting emerging front-facing mobile AR applications, e.g., virtual try-on,
where a user leverages a front-facing camera to preview products of different
styles (e.g., glasses or hats). This lack of support stems from the unique
challenge of obtaining a 360$^\circ$ HDR environment map, an ideal lighting
representation, from a front-facing camera with existing techniques. In this
paper, we propose to leverage dual-camera
streaming to generate a high-quality environment map by combining multi-view
lighting reconstruction and parametric directional lighting estimation. Our
preliminary results show improved rendering quality using a dual-camera setup
for front-facing AR compared to a commercial solution.
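To make the proposed combination concrete, here is a minimal sketch (in Python/NumPy) of how dual-camera fusion could work in principle: two opposing pinhole frames are splatted onto an equirectangular environment map, and the unobserved band between the two frusta is filled with a parametric ambient-plus-dominant-directional model fitted to the observed texels. This is an illustrative assumption, not the paper's implementation; all function names, the pinhole model, and the fill heuristic are hypothetical.

```python
import numpy as np

def camera_rays(h, w, fov_deg, forward):
    """Unit view rays of a pinhole camera looking along +z (forward=+1, back
    camera) or -z (forward=-1, front camera)."""
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    xs = np.arange(w) - (w - 1) / 2
    ys = np.arange(h) - (h - 1) / 2
    x, y = np.meshgrid(xs, ys)
    z = np.full_like(x, f * forward)
    d = np.stack([x, y, z], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def splat(env, wgt, frame, rays):
    """Accumulate camera pixels into an equirectangular map by ray direction."""
    H, W = wgt.shape
    theta = np.arccos(np.clip(rays[..., 1], -1.0, 1.0))    # polar angle from +y
    phi = np.arctan2(rays[..., 0], rays[..., 2]) + np.pi   # azimuth in [0, 2pi]
    v = np.minimum((theta / np.pi * H).astype(int), H - 1)
    u = np.minimum((phi / (2 * np.pi) * W).astype(int), W - 1)
    np.add.at(env, (v.ravel(), u.ravel()), frame.reshape(-1, 3))
    np.add.at(wgt, (v.ravel(), u.ravel()), 1.0)

def envmap_dirs(H, W):
    """Unit direction at the centre of every equirectangular texel."""
    theta = (np.arange(H) + 0.5) / H * np.pi
    phi = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.sin(p), np.cos(t), np.sin(t) * np.cos(p)], axis=-1)

def fuse(front, back, fov_deg=75.0, H=128, W=256):
    """Fuse two (h, w, 3) float frames into an (H, W, 3) environment map."""
    env = np.zeros((H, W, 3))
    wgt = np.zeros((H, W))
    splat(env, wgt, front, camera_rays(*front.shape[:2], fov_deg, -1))
    splat(env, wgt, back, camera_rays(*back.shape[:2], fov_deg, +1))
    seen = wgt > 0
    env[seen] /= wgt[seen][:, None]
    # Parametric fill: ambient colour plus one dominant directional light,
    # both estimated from the observed texels (a crude stand-in for the
    # paper's parametric directional lighting estimation).
    dirs = envmap_dirs(H, W)
    inten = env.sum(axis=-1)
    dom = (dirs[seen] * inten[seen, None]).sum(axis=0)
    dom /= np.linalg.norm(dom) + 1e-8
    ambient, peak = env[seen].mean(axis=0), env[seen].max(axis=0)
    fill = ambient + np.clip(dirs @ dom, 0.0, None)[..., None] * (peak - ambient)
    env[~seen] = fill[~seen]
    return env
```

Even this toy version shows why the two ingredients are complementary: the splatting step preserves high-frequency detail wherever either camera actually sees the scene, while the parametric term supplies plausible low-frequency lighting for the large solid angle neither camera covers.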
Related papers
- CleAR: Robust Context-Guided Generative Lighting Estimation for Mobile Augmented Reality [6.292933471495322]
We propose a generative lighting estimation system called CleAR that can produce high-quality environment maps in the format of 360$^\circ$ images.
Our end-to-end generative estimation completes in as little as 3.2 seconds, outperforming state-of-the-art methods by 110x.
arXiv Detail & Related papers (2024-11-04T15:37:18Z)
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images [32.83096814910201]
We present a method that recovers the physically based material properties and lighting of a scene from multi-view, low-dynamic-range (LDR) images.
Our method outperforms existing methods taking LDR images as input, and allows for highly realistic relighting and object insertion.
arXiv Detail & Related papers (2024-01-23T18:59:56Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- LitAR: Visually Coherent Lighting for Mobile Augmented Reality [24.466149552743516]
We present the design and implementation of a lighting reconstruction framework called LitAR.
LitAR addresses several challenges of supporting lighting information for mobile AR.
arXiv Detail & Related papers (2023-01-15T20:47:38Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K-resolution images in under 1 second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- On-the-go Reflectance Transformation Imaging with Ordinary Smartphones [5.381004207943598]
Reflectance Transformation Imaging (RTI) is a popular technique that allows the recovery of per-pixel reflectance information.
We propose a novel RTI method that can be carried out by recording videos with two ordinary smartphones (a classical per-pixel reflectance formulation is sketched after this list).
arXiv Detail & Related papers (2022-10-18T13:00:22Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
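For the RTI entry above: classical reflectance transformation imaging fits a Polynomial Texture Map (Malzbender et al., 2001), modelling each pixel's luminance as a quadratic polynomial in the light direction's (lu, lv) components and solving for the six coefficients by least squares over N differently lit images. The sketch below shows that classical baseline under this assumption; it is not the two-smartphone video pipeline of the paper itself.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """images: (N, H, W) luminance stack; light_dirs: (N, 3) unit light
    directions. Returns per-pixel PTM coefficients of shape (H, W, 6)."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # PTM basis per observation: [lu^2, lv^2, lu*lv, lu, lv, 1].
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    N, H, W = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(N, -1), rcond=None)
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, light_dir):
    """Evaluate the fitted model under a new light direction -> (H, W)."""
    lu, lv = light_dir[0], light_dir[1]
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis
```

Six or more well-spread light directions make the per-pixel system determined; the paper's contribution is acquiring such calibrated observations casually, from videos recorded with two ordinary smartphones, rather than with a light dome.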
This list is automatically generated from the titles and abstracts of the papers on this site.