Ternary-type Opacity and Hybrid Odometry for RGB-only NeRF-SLAM
- URL: http://arxiv.org/abs/2312.13332v2
- Date: Fri, 22 Dec 2023 18:32:55 GMT
- Title: Ternary-type Opacity and Hybrid Odometry for RGB-only NeRF-SLAM
- Authors: Junru Lin, Asen Nachkov, Songyou Peng, Luc Van Gool, Danda Pani Paudel
- Abstract summary: We study why ternary-type opacity is well-suited and desired for the task at hand.
We propose a simple yet novel visual odometry scheme that uses a hybrid combination of volumetric and warping-based image renderings.
- Score: 62.23809541385653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The opacity of rigid 3D scenes with opaque surfaces is considered to be of a binary type. However, we observed that this property is not followed by the existing RGB-only NeRF-SLAM. Therefore, we are motivated to introduce this prior into the RGB-only NeRF-SLAM pipeline. Unfortunately, optimization through the volumetric rendering function does not facilitate easy integration of the desired prior. Instead, we observed that ternary-type (TT) opacity is well supported. In this work, we study why ternary-type opacity is well-suited and desired for the task at hand. In particular, we provide theoretical insights into the process of jointly optimizing radiance and opacity through the volumetric rendering process. Through exhaustive experiments on benchmark datasets, we validate our claim and provide insights into the optimization process, which we believe will unleash the potential of RGB-only NeRF-SLAM. To foster this line of research, we also propose a simple yet novel visual odometry scheme that uses a hybrid combination of volumetric and warping-based image renderings. More specifically, the proposed hybrid odometry (HO) additionally uses image warping-based coarse odometry, leading up to an order-of-magnitude final speed-up. Furthermore, we show that the proposed TT and HO complement each other well, offering state-of-the-art results on benchmark datasets in terms of both speed and accuracy.
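To make the TT prior concrete, here is a minimal PyTorch sketch. It assumes the prior is imposed as a regularizer that pulls each per-sample opacity toward one of three levels {0, mid, 1} on top of a standard volumetric rendering loss; the paper's exact formulation may differ, and all names (`volume_render_weights`, `ternary_opacity_loss`) are illustrative.

```python
import torch
import torch.nn.functional as F

def volume_render_weights(sigma: torch.Tensor, deltas: torch.Tensor):
    """Standard NeRF compositing: per-sample opacity (alpha) and the
    transmittance-weighted rendering weights along each ray."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                      # (rays, samples)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                     # transmittance T_i
    return alpha, alpha * trans                                   # alpha_i, weights w_i

def ternary_opacity_loss(alpha: torch.Tensor, mid: float = 0.5):
    """Hypothetical TT prior: the penalty vanishes only when alpha sits at
    one of three levels {0, mid, 1}, rather than the binary {0, 1}."""
    return (alpha * (alpha - mid).abs() * (1.0 - alpha)).mean()

# illustrative mapping step: densities/colors would come from the NeRF model
sigma = torch.rand(1024, 64)            # per-sample densities along 1024 rays
deltas = torch.full((1024, 64), 0.02)   # distances between consecutive samples
colors = torch.rand(1024, 64, 3)        # per-sample radiance
target = torch.rand(1024, 3)            # ground-truth pixel colors

alpha, weights = volume_render_weights(sigma, deltas)
rgb = (weights.unsqueeze(-1) * colors).sum(dim=1)                 # composited color
loss = F.mse_loss(rgb, target) + 0.01 * ternary_opacity_loss(alpha)
```

A binary prior would keep only the {0, 1} states; the extra intermediate level is what, per the abstract, the joint radiance-opacity optimization through volumetric rendering actually supports well.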
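The hybrid odometry idea, many cheap warping-based pose updates followed by a few expensive volumetric-rendering refinements, can be sketched similarly. The example below assumes a depth map for the previous frame is available for warping (e.g. rendered from the NeRF map; the abstract does not specify this), and `se3_to_matrix` and `warp_photometric_loss` are hypothetical helpers; the fine volumetric stage is only indicated in comments.

```python
import torch
import torch.nn.functional as F

def se3_to_matrix(xi: torch.Tensor) -> torch.Tensor:
    """First-order SE(3) retraction: twist (wx, wy, wz, tx, ty, tz) -> 4x4 pose.
    Adequate for the small inter-frame motions tracked here."""
    w, t = xi[:3], xi[3:]
    one = torch.ones((), dtype=xi.dtype)
    return torch.stack([
        torch.stack([one, -w[2], w[1], t[0]]),
        torch.stack([w[2], one, -w[0], t[1]]),
        torch.stack([-w[1], w[0], one, t[2]]),
        torch.tensor([0.0, 0.0, 0.0, 1.0]),
    ])

def warp_photometric_loss(xi, prev_rgb, prev_depth, curr_rgb, K):
    """Project previous-frame pixels into the current view under the candidate
    pose and compare colors; differentiable in xi via grid_sample."""
    H, W = prev_depth.shape
    T = se3_to_matrix(xi)
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)            # (H, W, 3)
    pts = (pix @ torch.linalg.inv(K).T) * prev_depth.unsqueeze(-1)   # back-project
    pts = pts @ T[:3, :3].T + T[:3, 3]                               # move to current view
    uv = pts @ K.T
    uv = uv[..., :2] / uv[..., 2:3].clamp(min=1e-6)                  # perspective divide
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,                # normalize to [-1, 1]
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    sampled = F.grid_sample(curr_rgb.permute(2, 0, 1)[None], grid[None],
                            align_corners=True)[0].permute(1, 2, 0)
    return (sampled - prev_rgb).abs().mean()

# toy inputs; in a real system prev_depth would come from the map
H, W = 120, 160
K = torch.tensor([[100.0, 0.0, W / 2], [0.0, 100.0, H / 2], [0.0, 0.0, 1.0]])
prev_rgb, curr_rgb = torch.rand(H, W, 3), torch.rand(H, W, 3)
prev_depth = torch.ones(H, W)

xi = torch.zeros(6, requires_grad=True)
opt = torch.optim.Adam([xi], lr=1e-3)
for _ in range(100):                         # coarse stage: cheap image warping
    opt.zero_grad()
    warp_photometric_loss(xi, prev_rgb, prev_depth, curr_rgb, K).backward()
    opt.step()
# fine stage (sketched): a handful of iterations against the volumetric renderer
# loss = (render_volumetric(nerf, se3_to_matrix(xi), K) - curr_rgb).abs().mean()
```

Because only a handful of iterations would ever touch the slow volumetric renderer, most of the tracking cost is paid by image warping, which is consistent with the up-to-order-of-magnitude speed-up reported above.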
Related papers
- HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction [38.47566815670662]
HI-SLAM2 is a geometry-aware Gaussian SLAM system that achieves fast and accurate monocular scene reconstruction using only RGB input.
We demonstrate significant improvements over existing Neural SLAM methods and even surpass RGB-D-based methods in both reconstruction and rendering quality.
arXiv Detail & Related papers (2024-11-27T01:39:21Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM [53.6402869027093]
We propose an efficient RGB-only dense SLAM system using a flexible neural point cloud scene representation.
We also introduce a novel DSPO layer for bundle adjustment, which optimizes the poses and depths of keyframes along with the scale of the monocular depth.
arXiv Detail & Related papers (2024-03-28T16:32:06Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity Monocular Dense Mapping [51.739466714312805]
We introduce Hi-Map, a novel monocular dense mapping approach based on Neural Radiance Field (NeRF).
Hi-Map is exceptional in its capacity to achieve efficient and high-fidelity mapping using only posed RGB inputs.
arXiv Detail & Related papers (2024-01-06T12:32:25Z)
- HI-SLAM: Monocular Real-time Dense Mapping with Hybrid Implicit Fields [11.627951040865568]
Recent neural mapping frameworks show promising results, but rely on RGB-D or pose inputs, or cannot run in real-time.
Our approach integrates dense-SLAM with neural implicit fields.
For efficient construction of neural fields, we employ a multi-resolution grid encoding and signed distance functions.
arXiv Detail & Related papers (2023-10-07T12:26:56Z)
- DaRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation [31.655818586634258]
We propose a novel framework, dubbed DäRF, that achieves robust NeRF reconstruction with a handful of real-world images.
Our framework imposes the MDE network's powerful geometry prior on the NeRF representation at both seen and unseen viewpoints.
In addition, we overcome the ambiguity problems of monocular depths through patch-wise scale-shift fitting and geometry distillation.
arXiv Detail & Related papers (2023-05-30T16:46:41Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- NICER-SLAM: Neural Implicit Scene Encoding for RGB SLAM [111.83168930989503]
NICER-SLAM is a dense RGB SLAM system that simultaneously optimizes camera poses and a hierarchical neural implicit map representation.
We show strong performance in dense mapping, tracking, and novel view synthesis, even competitive with recent RGB-D SLAM systems.
arXiv Detail & Related papers (2023-02-07T17:06:34Z)