NeB-SLAM: Neural Blocks-based Scalable RGB-D SLAM for Unknown Scenes
- URL: http://arxiv.org/abs/2405.15151v1
- Date: Fri, 24 May 2024 02:11:45 GMT
- Title: NeB-SLAM: Neural Blocks-based Scalable RGB-D SLAM for Unknown Scenes
- Authors: Lizhi Bai, Chunqi Tian, Jun Yang, Siyu Zhang, Weijian Liang,
- Abstract summary: NeB-SLAM is a neural block-based scalable RGB-D SLAM for unknown scenes.
We first propose a divide-and-conquer mapping strategy that represents the entire unknown scene as a set of sub-maps.
We then introduce an adaptive map growth strategy to achieve adaptive allocation of neural blocks during camera tracking.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit representations have recently demonstrated considerable potential in the field of visual simultaneous localization and mapping (SLAM). This is due to their inherent advantages, including low storage overhead and representation continuity. However, these methods necessitate the size of the scene as input, which is impractical for unknown scenes. Consequently, we propose NeB-SLAM, a neural block-based scalable RGB-D SLAM for unknown scenes. Specifically, we first propose a divide-and-conquer mapping strategy that represents the entire unknown scene as a set of sub-maps. These sub-maps are a set of neural blocks of fixed size. Then, we introduce an adaptive map growth strategy to achieve adaptive allocation of neural blocks during camera tracking and gradually cover the whole unknown scene. Finally, extensive evaluations on various datasets demonstrate that our method is competitive in both mapping and tracking when targeting unknown environments.
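The divide-and-conquer mapping strategy can be made concrete with a small sketch. The following is a hypothetical illustration (not the authors' code): fixed-size blocks are allocated lazily, keyed by their grid index, whenever tracking observes a point outside every existing block, so the map grows adaptively to cover an unknown scene. All names (`BlockMap`, `observe`) and the per-block placeholder dictionary are assumptions for illustration.

```python
# Hypothetical sketch of adaptive neural-block allocation (not the authors' code).
# The scene is covered by fixed-size cubic blocks, allocated on demand
# as the camera observes new regions during tracking.

class BlockMap:
    """Sparse map of fixed-size blocks, allocated lazily during tracking."""

    def __init__(self, block_size=2.0):
        self.block_size = block_size  # edge length of each cubic block (meters)
        self.blocks = {}              # (i, j, k) grid index -> per-block parameters

    def _index(self, point):
        # Map a 3D point to the integer index of the block containing it.
        return tuple(int(c // self.block_size) for c in point)

    def observe(self, points):
        # Allocate a new block for every point that falls outside existing blocks.
        for p in points:
            idx = self._index(p)
            if idx not in self.blocks:
                # Placeholder standing in for per-block network parameters.
                self.blocks[idx] = {"origin": tuple(i * self.block_size for i in idx)}

# Usage: as the camera moves, the set of blocks grows to cover the scene.
m = BlockMap(block_size=2.0)
m.observe([(0.5, 0.5, 0.5), (1.9, 0.1, 0.2)])  # both fall in block (0, 0, 0)
m.observe([(2.5, 0.0, 0.0)])                   # crosses into block (1, 0, 0)
print(len(m.blocks))  # 2
```

Because block size is fixed, no prior knowledge of the overall scene extent is required, which is the property the abstract emphasizes for unknown scenes.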
Related papers
- NIS-SLAM: Neural Implicit Semantic RGB-D SLAM for 3D Consistent Scene Understanding [31.56016043635702]
We introduce NIS-SLAM, an efficient neural implicit semantic RGB-D SLAM system.
For high-fidelity surface reconstruction and spatial consistent scene understanding, we combine high-frequency multi-resolution tetrahedron-based features.
We also show that our approach can be used in augmented reality applications.
arXiv Detail & Related papers (2024-07-30T14:27:59Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - SCALAR-NeRF: SCAlable LARge-scale Neural Radiance Fields for Scene Reconstruction [66.69049158826677]
We introduce SCALAR-NeRF, a novel framework tailored for scalable large-scale neural scene reconstruction.
We structure the neural representation as an encoder-decoder architecture, where the encoder processes 3D point coordinates to produce encoded features.
We propose an effective and efficient methodology to fuse the outputs from these local models to attain the final reconstruction.
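The idea of fusing outputs from several local models can be sketched as follows. This is an illustrative toy only, not SCALAR-NeRF's actual fusion rule: predictions from local models are blended with inverse-distance weights, so the model whose region is closest to the query point dominates. The function name and the `(center, predict_fn)` representation of a local model are assumptions.

```python
# Illustrative sketch only (not SCALAR-NeRF's fusion method): blend the
# predictions of several local models with inverse-distance weights.

def fuse_local_outputs(query, local_models, eps=1e-6):
    """local_models: list of (center, predict_fn) pairs; returns fused scalar."""
    weights, values = [], []
    for center, predict in local_models:
        d = sum((q - c) ** 2 for q, c in zip(query, center)) ** 0.5
        w = 1.0 / (d + eps)  # closer local model -> larger weight
        weights.append(w)
        values.append(predict(query))
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Two toy "local models" that each predict a constant value in their region.
models = [((0.0, 0.0, 0.0), lambda q: 1.0),
          ((10.0, 0.0, 0.0), lambda q: 3.0)]
print(round(fuse_local_outputs((1.0, 0.0, 0.0), models), 2))  # 1.2
```

The weighting makes the fused field continuous across region boundaries, which is the usual motivation for soft blending over hard hand-offs between local models.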
arXiv Detail & Related papers (2023-11-28T10:18:16Z) - FMapping: Factorized Efficient Neural Field Mapping for Real-Time Dense RGB SLAM [3.6985351289638957]
We introduce FMapping, an efficient neural field mapping framework that facilitates the continuous estimation of a colorized point cloud map in real-time dense RGB SLAM.
We propose an effective factorization scheme for scene representation and introduce a sliding window strategy to reduce the uncertainty for scene reconstruction.
arXiv Detail & Related papers (2023-06-01T11:51:46Z) - Point-SLAM: Dense Neural Point Cloud-based SLAM [61.96492935210654]
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGB-D input.
We demonstrate that both tracking and mapping can be performed with the same point-based neural scene representation.
arXiv Detail & Related papers (2023-04-09T16:48:26Z) - NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM [51.21564182169607]
NEWTON is a view-centric mapping method that dynamically constructs neural fields based on run-time observation.
Our method enables camera pose updates using loop closures and scene boundary updates by representing the scene with multiple neural fields.
The experimental results demonstrate the superior performance of our method over existing world-centric neural field-based SLAM systems.
arXiv Detail & Related papers (2023-03-23T20:22:01Z) - Dense RGB SLAM with Neural Implicit Maps [34.37572307973734]
We present a dense RGB SLAM method with neural implicit map representation.
Our method simultaneously solves the camera motion and the neural implicit map by matching the rendered and input video frames.
Our method achieves more favorable results than previous methods and even surpasses some recent RGB-D SLAM methods.
arXiv Detail & Related papers (2023-01-21T09:54:07Z) - Surface Normal Clustering for Implicit Representation of Manhattan Scenes [67.16489078998961]
View synthesis and 3D modeling using implicit neural field representations have been shown to be very effective for multi-view camera setups.
Most existing methods that exploit additional supervision require dense pixel-wise labels or localized scene priors.
In this work, we aim to leverage the geometric prior of Manhattan scenes to improve the implicit neural radiance field representations.
arXiv Detail & Related papers (2022-12-02T17:46:55Z) - NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.