Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation
- URL: http://arxiv.org/abs/2105.07593v2
- Date: Wed, 19 May 2021 14:12:02 GMT
- Title: Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation
- Authors: Peter Karkus, Shaojun Cai, David Hsu
- Abstract summary: SLAM-net encodes a particle filter based SLAM algorithm in a differentiable graph.
It learns task-oriented neural network components by backpropagating through the SLAM algorithm.
It significantly outperforms the widely adopted ORB-SLAM in noisy conditions.
- Score: 15.677860200178959
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simultaneous localization and mapping (SLAM) remains challenging for a number
of downstream applications, such as visual robot navigation, because of rapid
turns, featureless walls, and poor camera quality. We introduce the
Differentiable SLAM Network (SLAM-net) along with a navigation architecture to
enable planar robot navigation in previously unseen indoor environments.
SLAM-net encodes a particle filter based SLAM algorithm in a differentiable
computation graph, and learns task-oriented neural network components by
backpropagating through the SLAM algorithm. Because it can optimize all model
components jointly for the end-objective, SLAM-net learns to be robust in
challenging conditions. We run experiments in the Habitat platform with
different real-world RGB and RGB-D datasets. SLAM-net significantly outperforms
the widely adopted ORB-SLAM in noisy conditions. Our navigation architecture
with SLAM-net improves the state-of-the-art for the Habitat Challenge 2020
PointNav task by a large margin (37% to 64% success). Project website:
http://sites.google.com/view/slamnet
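To make the abstract's central idea concrete, that is, running a particle-filter SLAM algorithm inside a differentiable computation graph so a task loss on the estimated pose can be backpropagated into the neural components, here is a minimal sketch of one filtering step. The ObservationModel, noise levels, and pose loss are hypothetical placeholders, not the networks or hyperparameters from the paper.
```python
# Minimal, hypothetical sketch of one differentiable particle-filter SLAM step.
import math
import torch
import torch.nn as nn


class ObservationModel(nn.Module):
    """Stand-in for a learned observation model: scores how well each particle
    pose explains the current observation embedding."""

    def __init__(self, obs_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, particles: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
        # particles: (K, 3) planar poses (x, y, yaw); obs: (obs_dim,) observation features
        inp = torch.cat([particles, obs.expand(particles.shape[0], -1)], dim=-1)
        return self.net(inp).squeeze(-1)  # (K,) per-particle log-likelihoods


def pf_step(particles, log_w, rel_odom, obs, obs_model, noise_std=(0.05, 0.05, 0.02)):
    """Predict + update; no hard resampling, so gradients flow through the whole step."""
    particles = particles + rel_odom + torch.randn_like(particles) * torch.tensor(noise_std)
    log_w = log_w + obs_model(particles, obs)                  # measurement update
    log_w = log_w - torch.logsumexp(log_w, dim=0)              # normalize in log space
    pose = (log_w.exp().unsqueeze(-1) * particles).sum(dim=0)  # weighted-mean pose estimate
    return particles, log_w, pose


if __name__ == "__main__":
    K = 128
    model = ObservationModel()
    particles = torch.zeros(K, 3)
    log_w = torch.full((K,), -math.log(K))
    rel_odom = torch.tensor([0.10, 0.0, 0.01])  # e.g. output of a visual-odometry head
    obs = torch.randn(32)

    particles, log_w, pose = pf_step(particles, log_w, rel_odom, obs, model)
    loss = (pose - torch.tensor([0.10, 0.0, 0.01])).pow(2).sum()  # end-objective on the pose
    loss.backward()  # gradients reach the observation model through the filter
```
Because the filter itself is differentiable, the same loss can also train upstream encoders (e.g. the mapping and visual-odometry components), which is what lets all model components be optimized jointly for the end objective.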
Related papers
- GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM [53.6402869027093]
We propose an efficient RGB-only dense SLAM system using a flexible neural point cloud scene representation.
We also introduce a novel DSPO layer for bundle adjustment, which jointly optimizes pose and depth along with the scale of the monocular depth.
arXiv Detail & Related papers (2024-03-28T16:32:06Z)
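As a rough illustration of one ingredient mentioned above, the sketch below aligns the scale and shift of a monocular depth map to sparse reference depths by gradient descent. It is a hypothetical toy covering only the depth-scale aspect, not GlORIE-SLAM's DSPO bundle-adjustment layer.
```python
# Toy alignment of monocular-depth scale/shift against sparse reference depths.
import torch

def align_mono_depth(mono_depth, sparse_depth, mask, iters=200, lr=1e-2):
    """mono_depth, sparse_depth: (H, W); mask: (H, W) bool, True where reference depth is valid."""
    scale = torch.ones(1, requires_grad=True)
    shift = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([scale, shift], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        pred = scale * mono_depth + shift
        loss = torch.abs(pred[mask] - sparse_depth[mask]).mean()  # L1 residual on valid pixels
        loss.backward()
        opt.step()
    return scale.detach(), shift.detach()

# Usage: corrected_depth = scale * mono_depth + shift
```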
- Loopy-SLAM: Dense Neural SLAM with Loop Closures [53.11936461015725]
We introduce Loopy-SLAM, which globally optimizes poses and the dense 3D model.
We perform frame-to-model tracking with a data-driven, point-based submap generation method and trigger loop closures online through global place recognition.
Evaluation on the synthetic Replica and real-world TUM-RGBD and ScanNet datasets demonstrates competitive or superior performance in tracking, mapping, and rendering accuracy compared to existing dense neural RGBD SLAM methods.
arXiv Detail & Related papers (2024-02-14T18:18:32Z)
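A hedged sketch of the loop-closure trigger idea: compare the current keyframe's global place-recognition descriptor against earlier ones and flag sufficiently similar, temporally distant candidates. The threshold, gap, and descriptor source are assumptions, not Loopy-SLAM's actual retrieval pipeline.
```python
# Illustrative loop-closure candidate detection from global descriptors.
import numpy as np

def detect_loop_candidates(descriptors, query_idx, sim_thresh=0.9, min_gap=50):
    """descriptors: (N, D) L2-normalized global descriptors, one per keyframe/submap.
    Returns indices of earlier keyframes similar enough to trigger a loop closure."""
    query = descriptors[query_idx]
    sims = descriptors @ query                 # cosine similarity (unit-norm descriptors)
    return [i for i in range(max(query_idx - min_gap, 0))  # skip temporally adjacent frames
            if sims[i] > sim_thresh]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = rng.normal(size=(200, 128))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    print(detect_loop_candidates(d, query_idx=180))
```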
- UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z)
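One common self-supervised recipe for sensor uncertainty is to predict a per-pixel scale and minimize a heteroscedastic negative log-likelihood; the sketch below shows a generic Laplacian variant under that assumption, not necessarily UncLe-SLAM's exact objective.
```python
# Generic heteroscedastic (Laplacian) uncertainty loss for depth residuals.
import torch

def laplacian_uncertainty_loss(pred_depth, target_depth, log_b, valid):
    """pred_depth, target_depth, log_b: (H, W); valid: (H, W) bool.
    log_b is a predicted per-pixel log-scale: large values down-weight the residual
    but are penalized by the +log_b term, so high uncertainty is only rewarded
    where residuals really are large."""
    residual = torch.abs(pred_depth - target_depth)
    nll = residual / torch.exp(log_b) + log_b
    return nll[valid].mean()
```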
- Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM [14.56883275492083]
Co-SLAM is an RGB-D SLAM system based on a hybrid representation.
It performs robust camera tracking and high-fidelity surface reconstruction in real time.
arXiv Detail & Related papers (2023-04-27T17:46:45Z)
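To illustrate what a joint coordinate-plus-parametric encoding can look like, the hedged sketch below concatenates a sinusoidal coordinate encoding with features trilinearly interpolated from a learnable grid (a dense grid stands in for a sparse hash grid) before a small decoder MLP. Resolutions and dimensions are made up for the example.
```python
# Hedged sketch of a hybrid coordinate + parametric scene encoding.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridEncoding(nn.Module):
    def __init__(self, grid_res=32, grid_dim=8, n_freqs=4):
        super().__init__()
        self.grid = nn.Parameter(0.01 * torch.randn(1, grid_dim, grid_res, grid_res, grid_res))
        self.n_freqs = n_freqs
        in_dim = grid_dim + 3 * 2 * n_freqs
        self.decoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def coord_encode(self, x):
        # x: (N, 3) in [-1, 1]; classic sinusoidal encoding as the smooth component
        feats = []
        for i in range(self.n_freqs):
            feats += [torch.sin((2 ** i) * math.pi * x), torch.cos((2 ** i) * math.pi * x)]
        return torch.cat(feats, dim=-1)

    def forward(self, x):
        # Trilinear interpolation into the feature grid (parametric component).
        coords = x.view(1, -1, 1, 1, 3)                                # (1, N, 1, 1, 3)
        g = F.grid_sample(self.grid, coords, align_corners=True)       # (1, C, N, 1, 1)
        g = g.squeeze(-1).squeeze(-1).squeeze(0).t()                   # (N, C)
        return self.decoder(torch.cat([g, self.coord_encode(x)], dim=-1))  # e.g. an SDF value
```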
- ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields [2.0625936401496237]
ESLAM reads RGB-D frames with unknown camera poses in a sequential manner and incrementally reconstructs the scene representation.
ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%.
arXiv Detail & Related papers (2022-11-21T18:25:14Z)
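One generic way to render depth from signed-distance values sampled along a ray is to convert the SDF into bell-shaped weights around its zero crossing and take the expected depth; the sketch below shows that common scheme as an assumption, not necessarily ESLAM's exact weighting.
```python
# Simplified SDF-to-rendering-weight conversion for depth rendering along rays.
import torch

def render_depth_from_sdf(sdf, z_vals, beta=0.1):
    """sdf, z_vals: (n_rays, n_samples); beta controls how sharply the weight
    concentrates around the zero crossing of the SDF (the surface)."""
    w = torch.sigmoid(sdf / beta) * torch.sigmoid(-sdf / beta)  # peaks where sdf ~ 0
    w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)                # normalize along each ray
    depth = (w * z_vals).sum(dim=-1)                            # expected termination depth
    return depth, w
```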
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build object-level semantic maps of dynamic scenarios in real time using only one commonly used Intel Core i7 CPU.
We evaluate RDS-SLAM on the TUM RGB-D dataset, and experimental results show that it can run at 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
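The core idea of keeping tracking and mapping anchored to static structure in dynamic scenes can be illustrated by discarding keypoints that land on semantically dynamic classes; the class list and helper below are hypothetical, not RDS-SLAM's code.
```python
# Toy filtering of keypoints on dynamic objects using a semantic segmentation mask.
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # hypothetical label set

def filter_dynamic_keypoints(keypoints, seg_labels, id_to_name):
    """keypoints: (N, 2) integer pixel coords (u, v); seg_labels: (H, W) class ids.
    Keeps only keypoints whose underlying pixel is not a dynamic class, so the
    tracker and mapper rely on static structure only."""
    keep = []
    for u, v in keypoints:
        if id_to_name[int(seg_labels[v, u])] not in DYNAMIC_CLASSES:
            keep.append((u, v))
    return np.array(keep, dtype=int)
```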
- Orbeez-SLAM: A Real-time Monocular Visual SLAM with ORB Features and NeRF-realized Mapping [18.083667773491083]
We develop a visual SLAM that adapts to new scenes without pre-training and generates dense maps for downstream tasks in real time.
Orbeez-SLAM combines an implicit neural representation (NeRF) with visual odometry to achieve these goals.
Results show that our SLAM is up to 800x faster than the strong baseline with superior rendering outcomes.
arXiv Detail & Related papers (2022-09-27T09:37:57Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
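A hedged sketch of what a hierarchical, multi-level feature lookup can look like: query coarse, mid, and fine learnable grids at the same 3D point and concatenate the results. The resolutions and feature sizes are invented for illustration, not NICE-SLAM's actual configuration.
```python
# Illustrative hierarchical (coarse/mid/fine) feature-grid lookup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalGrid(nn.Module):
    def __init__(self, resolutions=(8, 32, 128), feat_dim=4):
        super().__init__()
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, r, r, r)) for r in resolutions]
        )

    def forward(self, x):
        # x: (N, 3) points in [-1, 1]; returns concatenated multi-level features (N, 3*feat_dim)
        coords = x.view(1, -1, 1, 1, 3)
        feats = []
        for grid in self.grids:
            f = F.grid_sample(grid, coords, align_corners=True)      # (1, C, N, 1, 1)
            feats.append(f.squeeze(-1).squeeze(-1).squeeze(0).t())   # (N, C)
        return torch.cat(feats, dim=-1)
```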
- OV$^{2}$SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications [59.013743002557646]
We describe OV$^{2}$SLAM, a fully online algorithm that handles both monocular and stereo camera setups, various map scales, and frame rates ranging from a few Hertz up to several hundred.
For the benefit of the community, we release the source code: https://github.com/ov2slam/ov2slam.
arXiv Detail & Related papers (2021-02-08T08:39:23Z)
- A Hybrid Learner for Simultaneous Localization and Mapping [2.1041384320978267]
Simultaneous localization and mapping (SLAM) is used to predict the dynamic motion path of a moving platform.
This work introduces a hybrid learning model that explores beyond feature fusion.
It enhances the weights of the SLAM front-end feature extractor by mutating the top layers of different deep networks.
The trajectory predictions from independently trained models are amalgamated to refine the location detail.
arXiv Detail & Related papers (2021-01-04T18:41:09Z)
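The trajectory-amalgamation step can be illustrated as a simple confidence-weighted average over trajectories predicted by independently trained models; the weighting below is a hypothetical stand-in for the paper's fusion rule.
```python
# Toy amalgamation of trajectory predictions from independently trained models.
import numpy as np

def amalgamate_trajectories(trajectories, weights=None):
    """trajectories: list of (T, 3) arrays of (x, y, yaw) predictions, one per model.
    Returns a single (T, 3) trajectory as a (weighted) mean. Averaging yaw this way
    ignores angle wrap-around; fine only for a small-angle illustration."""
    stack = np.stack(trajectories)                     # (M, T, 3)
    if weights is None:
        weights = np.ones(len(trajectories))
    return np.average(stack, axis=0, weights=weights)  # (T, 3)
```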
- DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features [5.319556638040589]
This paper shows that feature extraction with deep convolutional neural networks (CNNs) can be seamlessly incorporated into a modern SLAM framework.
The proposed SLAM system utilizes a state-of-the-art CNN to detect keypoints in each image frame, and to give not only keypoint descriptors, but also a global descriptor of the whole image.
arXiv Detail & Related papers (2020-08-12T16:14:46Z)
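Given deep local descriptors from a CNN, frame-to-frame matching can use a mutual nearest-neighbour check as sketched below; this is a generic illustration of how such descriptors plug into a feature-based SLAM front end, not DXSLAM's matcher.
```python
# Mutual nearest-neighbour matching of deep local descriptors between two frames.
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """desc_a: (Na, D), desc_b: (Nb, D), L2-normalized descriptors.
    Returns (i, j) index pairs where a's nearest neighbour in b and b's nearest
    neighbour in a agree, a standard filter for reliable correspondences."""
    sims = desc_a @ desc_b.T          # cosine similarity matrix
    nn_ab = sims.argmax(axis=1)       # best b for each a
    nn_ba = sims.argmax(axis=0)       # best a for each b
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```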