LIFT-SLAM: a deep-learning feature-based monocular visual SLAM method
- URL: http://arxiv.org/abs/2104.00099v1
- Date: Wed, 31 Mar 2021 20:35:10 GMT
- Title: LIFT-SLAM: a deep-learning feature-based monocular visual SLAM method
- Authors: Hudson M. S. Bruno and Esther L. Colombini
- Abstract summary: We propose to combine the potential of deep learning-based feature descriptors with the traditional geometry-based VSLAM.
Experiments conducted on KITTI and Euroc datasets show that deep learning can be used to improve the performance of traditional VSLAM systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Simultaneous Localization and Mapping (SLAM) problem addresses the
ability of a robot to localize itself in an unknown environment while
simultaneously building a consistent map of that environment. Recently, cameras
have been successfully used to extract features from the environment to perform
SLAM, an approach referred to as visual SLAM (VSLAM). However, classical VSLAM
algorithms are prone to failure when either the motion of the robot or the
environment is too challenging. Although new approaches based on Deep Neural
Networks (DNNs) have achieved promising results in VSLAM, they are still unable
to outperform traditional methods. To leverage the robustness of deep learning
to enhance traditional VSLAM systems, we propose to combine the potential of
deep learning-based feature descriptors with traditional geometry-based VSLAM,
building a new VSLAM system called LIFT-SLAM. Experiments conducted on the
KITTI and EuRoC datasets show that deep learning can be used to improve the
performance of traditional VSLAM systems, as the proposed approach achieves
results comparable to the state of the art while being robust to sensor noise.
We further enhance the proposed VSLAM pipeline with an adaptive approach that
avoids parameter tuning for specific datasets, and we evaluate how transfer
learning affects the quality of the extracted features.
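The hybrid design described above swaps handcrafted descriptors for the outputs of a learned feature network while keeping the downstream matching and geometric verification classical. A minimal sketch of the descriptor-matching stage such a pipeline relies on (the function name, dimensions, and ratio threshold are illustrative, not taken from the paper):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbor matching with a Lowe-style ratio test.

    desc_a: (N, D) array of descriptors from image A.
    desc_b: (M, D) array of descriptors from image B
            (e.g. 128-D learned descriptors; M >= 2).
    Returns a list of (i, j) index pairs of accepted matches.
    """
    # Pairwise Euclidean distances between every descriptor pair.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(dists.shape[0]):
        order = np.argsort(dists[i])
        best, second = order[0], order[1]
        # Ratio test: keep only matches clearly better than the runner-up.
        if dists[i, best] < ratio * dists[i, second]:
            # Mutual check: 'best' must also prefer 'i' among all rows of A.
            if np.argmin(dists[:, best]) == i:
                matches.append((i, int(best)))
    return matches
```

In a full VSLAM pipeline, the resulting correspondences would feed essential-matrix estimation with RANSAC and the usual tracking and mapping threads; only the source of the descriptors changes.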
Related papers
- Light-SLAM: A Robust Deep-Learning Visual SLAM System Based on LightGlue under Challenging Lighting Conditions
We propose a novel hybrid system for visual SLAM based on the LightGlue deep learning network.
We have combined traditional geometry-based approaches to introduce a complete visual SLAM system for monocular, binocular, and RGB-D sensors.
The experimental results show that the proposed method exhibits better accuracy and robustness in adapting to low-light and strongly light-varying environments.
arXiv Detail & Related papers (2024-05-10T10:54:03Z)
- DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking and Loop-Closing
Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning based SLAM systems.
Our system employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of keypoint extraction networks.
To mitigate cumulative positioning errors, DK-SLAM incorporates a novel online learning module that utilizes binary features for loop closure detection.
arXiv Detail & Related papers (2024-01-17T12:08:30Z)
- DDN-SLAM: Real-time Dense Dynamic Neural Implicit SLAM
We introduce DDN-SLAM, the first real-time dense dynamic neural implicit SLAM system integrating semantic features.
Compared to existing neural implicit SLAM systems, the tracking results on dynamic datasets indicate an average 90% improvement in Absolute Trajectory Error (ATE) accuracy.
arXiv Detail & Related papers (2024-01-03T05:42:17Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Differentiable SLAM Helps Deep Learning-based LiDAR Perception Tasks
We investigate a new paradigm that uses differentiable SLAM architectures in a self-supervised manner to train end-to-end deep learning models in various LiDAR-based applications.
We demonstrate that this new paradigm of using a SLAM loss signal while training LiDAR-based models can be easily adopted by the community.
arXiv Detail & Related papers (2023-09-17T08:24:16Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures a certain fairness across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- A comparative evaluation of learned feature descriptors on hybrid monocular visual SLAM methods
We compare the performance of hybrid monocular VSLAM methods with different learned feature descriptors.
Experiments conducted on the KITTI and EuRoC MAV datasets confirm that learned feature descriptors can create more robust VSLAM systems.
arXiv Detail & Related papers (2021-03-31T19:56:32Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain fairness across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency, and robustness of conventional monocular SLAM systems.
Our approach can be up to 10x faster than the state of the art with comparable accuracy on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- Learning to Explore using Active Neural SLAM
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments.
The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge.
arXiv Detail & Related papers (2020-04-10T17:57:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.