IllumiNet: Transferring Illumination from Planar Surfaces to Virtual
Objects in Augmented Reality
- URL: http://arxiv.org/abs/2007.05981v1
- Date: Sun, 12 Jul 2020 13:11:14 GMT
- Title: IllumiNet: Transferring Illumination from Planar Surfaces to Virtual
Objects in Augmented Reality
- Authors: Di Xu, Zhen Li, Yanning Zhang, Qi Cao
- Abstract summary: This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
- Score: 38.83696624634213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a learning-based illumination estimation method
for virtual objects in real environments. While previous works tackled this problem by
reconstructing high dynamic range (HDR) environment maps or the corresponding
spherical harmonics, we do not seek to recover the lighting environment of the
entire scene. Given a single RGB image, our method directly infers the relit
virtual object by transferring the illumination features extracted from planar
surfaces in the scene to the desired geometries. Compared to previous works,
our approach is more robust as it works in both indoor and outdoor environments
with spatially-varying illumination. Experiments and evaluation results show
that our approach outperforms the state-of-the-art quantitatively and
qualitatively, achieving realistic augmented experience.
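The spherical-harmonics (SH) lighting representation that the abstract contrasts against can be illustrated with a small sketch: diffuse shading from second-order SH coefficients using the standard Lambertian convolution weights. This is an illustrative baseline only, not IllumiNet's method, and the function names are hypothetical:

```python
import numpy as np

def sh9_basis(n):
    """Second-order (9-term) real spherical harmonics evaluated at unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,                    # Y_00
        0.488603 * y,                # Y_1-1
        0.488603 * z,                # Y_10
        0.488603 * x,                # Y_11
        1.092548 * x * y,            # Y_2-2
        1.092548 * y * z,            # Y_2-1
        0.315392 * (3 * z * z - 1),  # Y_20
        1.092548 * x * z,            # Y_21
        0.546274 * (x * x - y * y),  # Y_22
    ])

# Lambertian BRDF convolution weights per SH band (pi, 2pi/3, pi/4 pattern).
A = np.array([3.141593,
              2.094395, 2.094395, 2.094395,
              0.785398, 0.785398, 0.785398, 0.785398, 0.785398])

def irradiance(L, n):
    """Diffuse irradiance at surface normal n from SH lighting coefficients L (shape (9,))."""
    n = np.asarray(n, dtype=float)
    return float(np.dot(A * L, sh9_basis(n / np.linalg.norm(n))))
```

Methods that recover such coefficients must still evaluate shading per geometry; IllumiNet instead bypasses this intermediate representation and regresses the relit object appearance directly.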
Related papers
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- NieR: Normal-Based Lighting Scene Rendering [17.421326290704844]
NieR (Normal-Based Lighting Scene Rendering) is a novel framework that takes into account the nuances of light reflection on diverse material surfaces.
We present the LD (Light Decomposition) module, which captures the lighting reflection characteristics on surfaces.
We also propose the HNGD (Hierarchical Normal Gradient Densification) module to overcome the limitations of sparse Gaussian representation.
arXiv Detail & Related papers (2024-05-21T14:24:43Z)
- A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields [38.370278809341954]
We present the first real-time method for inserting a rigid virtual object into a neural radiance field.
By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality.
arXiv Detail & Related papers (2023-10-09T16:26:34Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in an image as if they appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.