RGB-D Mapping and Tracking in a Plenoxel Radiance Field
- URL: http://arxiv.org/abs/2307.03404v2
- Date: Tue, 7 Nov 2023 04:18:43 GMT
- Title: RGB-D Mapping and Tracking in a Plenoxel Radiance Field
- Authors: Andreas L. Teigen, Yeonsoo Park, Annette Stahl, Rudolf Mester
- Abstract summary: We present the vital differences between view synthesis models and 3D reconstruction models.
We also comment on why a depth sensor is essential for modeling accurate geometry in general outward-facing scenes.
Our method achieves state-of-the-art results in both mapping and tracking tasks, while also being faster than competing neural network-based approaches.
- Score: 5.239559610798646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread adoption of Neural Radiance Fields (NeRFs) has driven
significant advances in the domain of novel view synthesis in recent years.
These models capture a volumetric radiance field of a scene, creating highly
convincing, dense, photorealistic models through the use of simple,
differentiable rendering equations. Despite their popularity, these algorithms
suffer from the severe ambiguities inherent in RGB-only visual data, which means
that although images generated through view synthesis can appear very
believable, the underlying 3D model is often wrong. This considerably
limits the usefulness of these models in practical applications like Robotics
and Extended Reality (XR), where an accurate dense 3D reconstruction otherwise
would be of significant value. In this paper, we present the vital differences
between view synthesis models and 3D reconstruction models. We also comment on
why a depth sensor is essential for modeling accurate geometry in general
outward-facing scenes using the current paradigm of novel view synthesis
methods. Focusing on the structure-from-motion task, we demonstrate this need
in practice by extending the Plenoxel radiance field model, presenting an
analytical differential approach for dense mapping and tracking with radiance
fields based on RGB-D data, without a neural network. Our method achieves
state-of-the-art results in both mapping and tracking tasks, while also being
faster than competing neural network-based approaches. The code is available
at: https://github.com/ysus33/RGB-D_Plenoxel_Mapping_Tracking.git
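
The abstract leans on two technical points: NeRF-style models (Plenoxels included)
render images by compositing density and color along camera rays, and an RGB-D
objective adds a depth term to the usual photometric one. For reference, the standard
volume rendering used by this family of methods is

\[
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} w_i\,\mathbf{c}_i, \qquad
\hat{D}(\mathbf{r}) = \sum_{i=1}^{N} w_i\,t_i, \qquad
w_i = T_i\bigl(1 - e^{-\sigma_i \delta_i}\bigr), \quad
T_i = \exp\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr),
\]

where \(\sigma_i\) and \(\mathbf{c}_i\) are the density and color sampled at depth
\(t_i\) along ray \(\mathbf{r}\) and \(\delta_i\) is the spacing between samples;
rendering depth as the expectation \(\hat{D}\) is one common choice and may differ
from the paper's exact formulation. The sketch below, in NumPy, shows how such a
rendered color and depth can be compared against an RGB-D observation. It is an
illustration under assumed grid layout, sampling settings, and loss weight
(lambda_depth), not the authors' implementation, which additionally derives
analytical gradients of this objective with respect to the voxel parameters
(mapping) and the camera pose (tracking).

```python
import numpy as np

def render_ray(density, rgb, origin, direction,
               t_near=0.2, t_far=4.0, n_samples=128, voxel_size=0.02):
    """Render color and expected depth along one ray through a dense voxel grid.

    density: (X, Y, Z) non-negative volume densities sigma
    rgb:     (X, Y, Z, 3) per-voxel colors in [0, 1]
    origin, direction: ray in the grid's metric frame (direction unit-norm)
    All default parameters are illustrative assumptions.
    """
    ts = np.linspace(t_near, t_far, n_samples)            # sample depths along the ray
    delta = ts[1] - ts[0]                                  # constant step length
    pts = origin[None, :] + ts[:, None] * direction[None, :]

    # Nearest-neighbour lookup for brevity; Plenoxels use trilinear interpolation.
    idx = np.clip((pts / voxel_size).astype(int), 0, np.array(density.shape) - 1)
    sigma = density[idx[:, 0], idx[:, 1], idx[:, 2]]       # (n_samples,)
    color = rgb[idx[:, 0], idx[:, 1], idx[:, 2]]           # (n_samples, 3)

    # Volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * delta))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    w = trans * alpha

    c_hat = (w[:, None] * color).sum(axis=0)               # rendered color
    d_hat = (w * ts).sum()                                 # expected (rendered) depth
    return c_hat, d_hat

def rgbd_residual(c_hat, d_hat, c_obs, d_obs, lambda_depth=0.5):
    """Per-pixel RGB-D loss: photometric term plus a weighted depth term."""
    return np.sum((c_hat - c_obs) ** 2) + lambda_depth * (d_hat - d_obs) ** 2

if __name__ == "__main__":
    # Toy example: random volume, one ray, synthetic RGB-D observation.
    density = np.random.rand(64, 64, 64)
    rgb = np.random.rand(64, 64, 64, 3)
    c_hat, d_hat = render_ray(density, rgb,
                              origin=np.array([0.64, 0.64, 0.0]),
                              direction=np.array([0.0, 0.0, 1.0]))
    print(rgbd_residual(c_hat, d_hat, c_obs=np.array([0.5, 0.5, 0.5]), d_obs=1.0))
```

In the paper's setting, residuals of this kind would be summed over all pixels of a
frame and minimized with respect to the grid values (mapping) or the 6-DoF camera
pose (tracking); the closed-form derivatives that replace a neural network are the
part this sketch deliberately omits.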
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new dataset of human motion, which includes multi-view images, scanned models, parametric model registration, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art synthesis of novel view and novel pose.
arXiv Detail & Related papers (2024-03-18T09:03:56Z)
- Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use Neural Radiance fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
arXiv Detail & Related papers (2023-12-06T00:46:30Z)
- NSLF-OL: Online Learning of Neural Surface Light Fields alongside Real-time Incremental 3D Reconstruction [0.76146285961466]
The paper proposes a novel Neural Surface Light Fields model that copes with the small range of view directions while producing a good result in unseen directions.
Our model learns online the Neural Surface Light Fields (NSLF) aside from real-time 3D reconstruction with a sequential data stream as the shared input.
In addition to online training, our model also provides real-time rendering after completing the data stream for visualization.
arXiv Detail & Related papers (2023-04-29T15:41:15Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation.
Our approach achieves sufficiently higher image quality and better 3D control than the previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.