A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields
- URL: http://arxiv.org/abs/2310.05837v1
- Date: Mon, 9 Oct 2023 16:26:34 GMT
- Title: A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields
- Authors: Keyang Ye, Hongzhi Wu, Xin Tong, Kun Zhou
- Abstract summary: We present the first real-time method for inserting a rigid virtual object into a neural radiance field.
By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality.
- Score: 38.370278809341954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first real-time method for inserting a rigid virtual object into a neural radiance field; it produces realistic lighting and shadowing effects and allows interactive manipulation of the object. By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality. For lighting estimation, we produce accurate, robust, and 3D spatially-varying incident lighting that combines near-field lighting from the NeRF with an environment light that accounts for sources the NeRF does not cover. For occlusion, we blend the rendered virtual object with the background scene using an opacity map integrated from the NeRF. For shadows, we use a precomputed field of spherical signed distance functions to query the visibility term at any point around the virtual object and cast soft, detailed shadows onto 3D surfaces. Compared with state-of-the-art techniques, our approach inserts virtual objects into scenes with superior fidelity and has great potential for application in augmented reality systems.
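To make the pipeline concrete, below is a minimal NumPy sketch of the three operations the abstract names: blending near-field NeRF lighting with an environment light, opacity-based occlusion compositing, and shadow attenuation by a visibility term. This is not the authors' implementation; every function name, signature, and buffer layout here is a hypothetical assumption.

```python
import numpy as np

def incident_lighting(near_field, env_light, coverage):
    """Blend per-direction near-field radiance recovered from the NeRF with
    an environment light for directions the NeRF does not cover.
    near_field, env_light: (n_dirs, 3) RGB; coverage: (n_dirs,) in [0, 1]."""
    w = coverage[:, None]
    return w * near_field + (1.0 - w) * env_light

def composite(obj_rgb, obj_alpha, bg_rgb, occ_alpha):
    """Alpha-blend the rendered object over the NeRF frame. occ_alpha is the
    opacity integrated along each camera ray up to the object's depth, so
    real geometry in front of the object correctly occludes it."""
    a = (obj_alpha * (1.0 - occ_alpha))[..., None]
    return a * obj_rgb + (1.0 - a) * bg_rgb

def apply_shadow(rgb, light_visibility, strength=0.8):
    """Darken scene surfaces by a visibility term: the fraction of the light
    left unblocked by the object (in the paper, queried from the precomputed
    spherical signed distance field)."""
    atten = 1.0 - strength * (1.0 - light_visibility)
    return rgb * atten[..., None]

# Toy usage with constant buffers standing in for real renders.
h, w = 4, 4
frame = composite(np.ones((h, w, 3)), np.full((h, w), 0.9),
                  np.zeros((h, w, 3)), np.full((h, w), 0.2))
frame = apply_shadow(frame, np.full((h, w), 0.6))
```

The real method evaluates these terms per pixel from the trained field at interactive rates; the sketch only fixes the arithmetic of the blends.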
Related papers
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- IllumiNet: Transferring Illumination from Planar Surfaces to Virtual Objects in Augmented Reality [38.83696624634213]
This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
arXiv Detail & Related papers (2020-07-12T13:11:14Z)