Orbeez-SLAM: A Real-time Monocular Visual SLAM with ORB Features and
NeRF-realized Mapping
- URL: http://arxiv.org/abs/2209.13274v1
- Date: Tue, 27 Sep 2022 09:37:57 GMT
- Title: Orbeez-SLAM: A Real-time Monocular Visual SLAM with ORB Features and
NeRF-realized Mapping
- Authors: Chi-Ming Chung, Yang-Che Tseng, Ya-Ching Hsu, Xiang-Qian Shi, Yun-Hung
Hua, Jia-Fong Yeh, Wen-Chin Chen, Yi-Ting Chen and Winston H. Hsu
- Abstract summary: We develop a visual SLAM that adapts to new scenes without pre-training and generates dense maps for downstream tasks in real time.
Orbeez-SLAM combines an implicit neural representation (NeRF) with visual odometry to achieve these goals.
Results show that our SLAM is up to 800x faster than the strong baseline with superior rendering outcomes.
- Score: 18.083667773491083
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A spatial AI that can perform complex tasks through visual signals and
cooperate with humans is highly anticipated. To achieve this, we need a visual
SLAM that easily adapts to new scenes without pre-training and generates dense
maps for downstream tasks in real time. None of the previous learning-based or
non-learning-based visual SLAMs satisfies all of these needs due to the
intrinsic limitations of their components. In this work, we develop a visual
SLAM named Orbeez-SLAM, which successfully combines an implicit neural
representation (NeRF) with visual odometry to achieve our goals. Moreover,
Orbeez-SLAM works with a monocular camera since it only needs RGB inputs,
making it widely applicable to the real world. We validate its effectiveness on
various challenging benchmarks. Results show that our SLAM is up to 800x faster
than the strong baseline with superior rendering outcomes.
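The abstract pairs a classical ORB-feature visual-odometry front-end with a NeRF map that is optimized online. As a rough illustration of the front-end half only, the snippet below estimates a two-frame relative pose from ORB matches with OpenCV; the function name and the intrinsics matrix `K` are assumptions for this sketch, not the authors' code, and the resulting poses are what would supervise a NeRF-style mapper.

```python
# Minimal two-frame visual-odometry step with ORB features (sketch only).
# The NeRF mapping back-end described in the paper is omitted here.
import cv2
import numpy as np

def relative_pose(img0, img1, K):
    """Estimate the relative rotation R and (up-to-scale) translation t
    between two RGB frames from ORB feature matches."""
    orb = cv2.ORB_create(nfeatures=2000)
    k0, d0 = orb.detectAndCompute(cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY), None)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)

    # Hamming distance is the standard metric for binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d0, d1)
    pts0 = np.float32([k0[m.queryIdx].pt for m in matches])
    pts1 = np.float32([k1[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decompose into R, t. For a monocular
    # camera t is recovered only up to scale, which is why RGB input suffices.
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t  # these poses would supervise the online NeRF optimization
```

In the paper's design, tracking of this kind runs in real time while the implicit map is trained in parallel, which is what allows dense mapping without pre-training.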
Related papers
- GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM [53.6402869027093]
We propose an efficient RGB-only dense SLAM system using a flexible neural point cloud scene representation.
We also introduce a novel DSPO layer for bundle adjustment, which optimizes pose and depth along with the scale of the monocular depth estimates.
arXiv Detail & Related papers (2024-03-28T16:32:06Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details; a generic sketch of such a constraint follows this entry.
Our experiments achieve state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
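As a generic illustration of a multi-view geometry constraint over image features (not DNS SLAM's actual loss; the function names, nearest-neighbor lookup, and L1 residual are assumptions for the sketch):

```python
# Generic multi-view feature-consistency residual (sketch only): a pixel with
# an estimated depth in view 0 is reprojected into view 1, and image-based
# feature vectors at the two corresponding locations are compared.
import numpy as np

def reproject(u, v, depth, K, T_0to1):
    """Lift pixel (u, v) with depth in view 0 and project it into view 1."""
    p0 = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # 3D point, cam 0
    p1 = T_0to1[:3, :3] @ p0 + T_0to1[:3, 3]                 # into cam 1 frame
    uv1 = K @ p1
    return uv1[:2] / uv1[2]                                  # pixel in view 1

def feature_residual(feat0, feat1, u, v, depth, K, T_0to1):
    """L1 distance between (H, W, C) feature maps at corresponding pixels.
    Bounds checks and sub-pixel interpolation are omitted for brevity."""
    u1, v1 = reproject(u, v, depth, K, T_0to1)
    f0 = feat0[int(v), int(u)]
    f1 = feat1[int(round(v1)), int(round(u1))]
    return np.abs(f0 - f1).mean()
```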
- Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM [14.56883275492083]
Co-SLAM is an RGB-D SLAM system based on a hybrid representation.
It performs robust camera tracking and high-fidelity surface reconstruction in real time.
arXiv Detail & Related papers (2023-04-27T17:46:45Z)
- NICER-SLAM: Neural Implicit Scene Encoding for RGB SLAM [111.83168930989503]
NICER-SLAM is a dense RGB SLAM system that simultaneously optimizes camera poses and a hierarchical neural implicit map representation.
We show strong performance in dense mapping, tracking, and novel view synthesis, even competitive with recent RGB-D SLAM systems.
arXiv Detail & Related papers (2023-02-07T17:06:34Z)
- RANA: Relightable Articulated Neural Avatars [83.60081895984634]
We propose RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans.
We present a novel framework that models humans while disentangling their geometry, texture, and lighting environment from monocular RGB videos.
arXiv Detail & Related papers (2022-12-06T18:59:31Z)
- ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields [2.0625936401496237]
ESLAM reads RGB-D frames with unknown camera poses sequentially and incrementally reconstructs the scene representation; a generic sketch of SDF-based depth rendering follows this entry.
ESLAM improves the accuracy of 3D reconstruction and camera localization over state-of-the-art dense visual SLAM methods by more than 50%.
arXiv Detail & Related papers (2022-11-21T18:25:14Z)
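For context, a common way SDF-based dense SLAM systems render depth is to map signed distances sampled along each camera ray to occupancy-like weights and alpha-composite them. The sketch below shows that generic idea only; it is not ESLAM's exact formulation, and the sharpness parameter `beta` is an assumption.

```python
# Generic SDF-to-depth rendering along one ray (sketch, not ESLAM's formula).
import numpy as np

def render_depth_from_sdf(sdf, t_vals, beta=10.0):
    """Composite a depth estimate from SDF samples along a ray.

    sdf:    (N,) signed distances at samples ordered near-to-far
    t_vals: (N,) depths of those samples along the ray
    """
    # A steep sigmoid maps sdf < 0 (inside) to ~1 and sdf > 0 (outside) to ~0,
    # so weight concentrates where the ray crosses the zero level set.
    alpha = 1.0 / (1.0 + np.exp(beta * sdf))
    # Standard alpha compositing: weight_i = alpha_i * prod_{j<i}(1 - alpha_j).
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans
    return (weights * t_vals).sum() / (weights.sum() + 1e-10)
```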
- iDF-SLAM: End-to-End RGB-D SLAM with Neural Implicit Mapping and Deep Feature Tracking [4.522666263036414]
We propose a novel end-to-end RGB-D SLAM, iDF-SLAM, which adopts a feature-based deep neural tracker as the front-end and a NeRF-style neural implicit mapper as the back-end.
The proposed iDF-SLAM demonstrates state-of-the-art results in terms of scene reconstruction and competitive performance in camera tracking.
arXiv Detail & Related papers (2022-09-16T13:32:57Z)
- RWT-SLAM: Robust Visual SLAM for Highly Weak-textured Environments [1.1024591739346294]
We propose a novel visual SLAM system named RWT-SLAM to tackle highly weak-textured environments.
We modify the LoFTR network, which produces dense point matches in low-textured scenes, to generate feature descriptors.
The resulting RWT-SLAM is tested on various public datasets such as TUM and OpenLORIS.
arXiv Detail & Related papers (2022-07-07T19:24:03Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation [15.677860200178959]
SLAM-net encodes a particle-filter-based SLAM algorithm in a differentiable computation graph.
It learns task-oriented neural network components by backpropagating through the SLAM algorithm.
It significantly outperforms the widely adopted ORB-SLAM in noisy conditions; a sketch of the soft-resampling idea that keeps such a filter differentiable follows this entry.
arXiv Detail & Related papers (2021-05-17T03:54:34Z)
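Backpropagating through a particle filter requires a differentiable stand-in for the discrete resampling step. A widely used trick in this line of work is soft resampling: draw indices from a mixture of the particle weights and a uniform distribution, then correct with importance weights. Below is a minimal PyTorch sketch, with `alpha`, the function name, and the shapes as assumptions rather than the authors' implementation.

```python
# Soft resampling: a differentiable approximation of particle resampling.
import torch

def soft_resample(particles, log_weights, alpha=0.5):
    """Resample (N, D) particles so gradients still flow through the weights.

    log_weights: (N,) unnormalized log importance weights from the network.
    alpha trades off standard resampling (1.0) against uniform sampling (0.0).
    """
    w = torch.softmax(log_weights, dim=0)
    n = w.shape[0]
    # Sample indices from a mixture of the weights and a uniform distribution.
    q = alpha * w + (1.0 - alpha) / n
    idx = torch.multinomial(q, n, replacement=True)
    # The importance-weight correction w/q keeps the new weights
    # differentiable with respect to the network outputs behind log_weights.
    new_particles = particles[idx]
    new_log_weights = torch.log(w[idx] / q[idx])
    return new_particles, new_log_weights
```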
- OV$^{2}$SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications [59.013743002557646]
We describe OV$^{2}$SLAM, a fully online algorithm handling both monocular and stereo camera setups, various map scales, and frame rates ranging from a few hertz up to several hundred.
For the benefit of the community, we release the source code: https://github.com/ov2slam/ov2slam.
arXiv Detail & Related papers (2021-02-08T08:39:23Z)