Monocular visual simultaneous localization and mapping: (r)evolution from geometry to deep learning-based pipelines
- URL: http://arxiv.org/abs/2503.02955v1
- Date: Tue, 04 Mar 2025 19:20:17 GMT
- Title: Monocular visual simultaneous localization and mapping: (r)evolution from geometry to deep learning-based pipelines
- Authors: Olaya Alvarez-Tunon, Yury Brodskiy, Erdal Kayacan
- Abstract summary: This paper surveys the current state of visual SLAM algorithms according to the two main frameworks: geometry-based and learning-based SLAM. We address two significant issues in surveying visual SLAM, providing (1) a consistent classification of visual SLAM pipelines and (2) a robust evaluation of their performance under different deployment conditions.
- Score: 5.277598111323804
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the rise of deep learning, there is a fundamental change in visual SLAM algorithms toward developing different modules trained as end-to-end pipelines. However, regardless of the implementation domain, visual SLAM's performance is subject to diverse environmental challenges, such as dynamic elements in outdoor environments, harsh imaging conditions in underwater environments, or blurriness in high-speed setups. These environmental challenges need to be identified to study the real-world viability of SLAM implementations. Motivated by the aforementioned challenges, this paper surveys the current state of visual SLAM algorithms according to the two main frameworks: geometry-based and learning-based SLAM. First, we introduce a general formulation of the SLAM pipeline that includes most of the implementations in the literature. Second, those implementations are classified and surveyed for geometry and learning-based SLAM. After that, environment-specific challenges are formulated to enable experimental evaluation of the resilience of different visual SLAM classes to varying imaging conditions. We address two significant issues in surveying visual SLAM, providing (1) a consistent classification of visual SLAM pipelines and (2) a robust evaluation of their performance under different deployment conditions. Finally, we give our take on future opportunities for visual SLAM implementations.
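The general pipeline formulation mentioned in the abstract can be sketched in very reduced form: a front end that chains incremental motion estimates into a trajectory, and a loop-closure step that corrects accumulated drift when a place is revisited. The class and method names below are illustrative assumptions, not the paper's notation; the drift correction is a crude stand-in for pose-graph optimization.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a generic visual SLAM pipeline (not the paper's code).
# A geometry-based system would derive the per-frame motion from feature
# matching; a learning-based one would regress it with a network.

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0  # heading in radians

@dataclass
class SlamPipeline:
    poses: list = field(default_factory=list)  # estimated trajectory

    def track(self, delta):
        """Front end: chain a relative motion estimate onto the last pose."""
        last = self.poses[-1] if self.poses else Pose()
        new = Pose(last.x + delta[0], last.y + delta[1], last.theta + delta[2])
        self.poses.append(new)
        return new

    def close_loop(self, frame_idx, drift):
        """Loop closure: on revisiting a known place, spread the observed
        drift correction linearly back over the trajectory (a crude stand-in
        for full pose-graph optimization)."""
        n = len(self.poses) - frame_idx
        for i, p in enumerate(self.poses[frame_idx:], start=1):
            s = i / n
            p.x -= s * drift[0]
            p.y -= s * drift[1]

pipeline = SlamPipeline()
for step in [(1.0, 0.0, 0.0)] * 4:        # drive 4 m along x
    pipeline.track(step)
pipeline.close_loop(0, drift=(0.4, 0.0))  # revisit start, observe 0.4 m drift
print(round(pipeline.poses[-1].x, 2))     # final pose pulled back toward truth
```

Both geometry-based and learning-based systems surveyed in the paper fit this coarse structure; they differ in how the tracking and loop-detection steps are implemented, not in the overall pipeline shape.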
Related papers
- VSLAM-LAB: A Comprehensive Framework for Visual SLAM Methods and Datasets [64.57742015099531]
VSLAM-LAB is a unified framework designed to streamline the development, evaluation, and deployment of VSLAM systems.
It enables seamless compilation and configuration of VSLAM algorithms, automated dataset downloading and preprocessing, and standardized experiment design, execution, and evaluation.
arXiv Detail & Related papers (2025-04-06T12:02:19Z)
- Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [73.40350756742231]
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning.
Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored.
arXiv Detail & Related papers (2024-02-12T18:21:14Z)
- DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking and Loop-Closing [13.50980509878613]
DK-SLAM employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of its keypoint extraction networks.
To mitigate cumulative positioning errors, it incorporates a novel online learning module that utilizes binary features for loop-closure detection.
Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning-based SLAM systems.
arXiv Detail & Related papers (2024-01-17T12:08:30Z)
- DVI-SLAM: A Dual Visual Inertial SLAM Network [31.067716365926845]
This paper proposes a novel deep SLAM network with dual visual factors.
We show that the proposed network dynamically learns and adjusts the confidence maps of both visual factors.
Extensive experiments validate that our proposed method significantly outperforms the state-of-the-art methods on several public datasets.
arXiv Detail & Related papers (2023-09-25T01:42:54Z)
- Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization [113.03189252044773]
We propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle domain shift in various visual tasks.
Our versatile SHADE can significantly enhance the generalization in various visual recognition tasks, including image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2022-12-18T11:42:51Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- LIFT-SLAM: a deep-learning feature-based monocular visual SLAM method [0.0]
We propose to combine the potential of deep learning-based feature descriptors with the traditional geometry-based VSLAM.
Experiments conducted on KITTI and Euroc datasets show that deep learning can be used to improve the performance of traditional VSLAM systems.
arXiv Detail & Related papers (2021-03-31T20:35:10Z)
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
- Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-Aliased Indoor Environments [35.663671249819124]
We present novel research that simultaneously addresses viewpoint change and perceptual aliasing.
We show that our integration of VPR with SLAM significantly boosts the performance of VPR, feature correspondence, and pose graph submodules.
For the first time, we demonstrate a localization system capable of state-of-the-art performance despite perceptual aliasing and extreme 180-degree-rotated viewpoint change.
arXiv Detail & Related papers (2020-10-03T20:18:55Z)
- Learning to Explore using Active Neural SLAM [99.42064696897533]
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments.
The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge.
arXiv Detail & Related papers (2020-04-10T17:57:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.