Visual SLAM: What are the Current Trends and What to Expect?
- URL: http://arxiv.org/abs/2210.10491v1
- Date: Wed, 19 Oct 2022 11:56:32 GMT
- Title: Visual SLAM: What are the Current Trends and What to Expect?
- Authors: Ali Tourani, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos
- Abstract summary: Vision-based sensors have shown significant performance, accuracy, and efficiency gains in Simultaneous Localization and Mapping (SLAM) systems.
We give an in-depth literature survey of forty-five impactful papers published in the VSLAM domain.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-based sensors have shown significant performance, accuracy, and
efficiency gains in Simultaneous Localization and Mapping (SLAM) systems in
recent years. Visual Simultaneous Localization and Mapping (VSLAM) refers to
SLAM approaches that employ cameras for pose estimation and map generation.
Many research works have demonstrated that VSLAM can outperform traditional
methods that rely on a single sensor, such as a Lidar, even at lower cost.
VSLAM approaches utilize different camera types (e.g., monocular, stereo, and
RGB-D), have been tested on various datasets (e.g., KITTI, TUM RGB-D, and
EuRoC) and in dissimilar environments (e.g., indoors and outdoors), and employ
multiple algorithms and methodologies to better understand the environment.
These variations have made the topic popular among researchers and have
produced a wide range of VSLAM methodologies. Accordingly, the primary intent
of this survey is to present recent advances in VSLAM systems and to discuss
the existing challenges and trends. We give an in-depth literature survey of
forty-five impactful papers published in the VSLAM domain and classify these
manuscripts by different characteristics, including the novelty domain,
objectives, employed algorithms, and semantic level. We also discuss current
trends and future directions that may guide researchers' investigations.
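The abstract describes pose estimation and map generation as the two core tasks a VSLAM system performs with camera data. As a minimal illustration (not taken from the survey), the sketch below shows linear triangulation via the Direct Linear Transform (DLT), a standard building block for landmark mapping once two camera poses are known. The intrinsics, poses, and landmark in the demo are made-up values for demonstration only.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D landmark from two pixel observations (linear DLT).

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel observations in each image.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0,
    # derived from the cross product x × (P X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Demo with invented values: camera 2 is translated 1 unit along x.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])  # hypothetical landmark

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_rec, X_true, atol=1e-6)
```

Real VSLAM systems wrap this step in feature matching, robust outlier rejection (e.g., RANSAC), and bundle adjustment; the DLT above is only the geometric core, and noiseless observations are assumed here so the recovery is exact.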
Related papers
- Large Action Models: From Inception to Implementation [51.81485642442344]
Large Action Models (LAMs) are designed for action generation and execution within dynamic environments.
LAMs hold the potential to transform AI from passive language understanding to active task completion.
We present a comprehensive framework for developing LAMs, offering a systematic approach to their creation, from inception to deployment.
arXiv Detail & Related papers (2024-12-13T11:19:56Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of previously learned knowledge when acquiring new knowledge.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
Review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
Paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Common Practices and Taxonomy in Deep Multi-view Fusion for Remote Sensing Applications [3.883984493622102]
Advances in remote sensing technologies have boosted applications for Earth observation.
Deep learning models have been applied to fuse the information from multiple views.
This article gathers works on multi-view fusion for Earth observation by focusing on the common practices and approaches used in the literature.
arXiv Detail & Related papers (2022-12-20T15:12:27Z)
- Det-SLAM: A semantic visual SLAM for highly dynamic scenes using Detectron2 [0.0]
This research combines the visual SLAM systems ORB-SLAM3 and Detectron2 to present the Det-SLAM system.
Det-SLAM is more resilient than previous dynamic SLAM systems and can lower the estimated camera-pose error in dynamic indoor scenarios.
arXiv Detail & Related papers (2022-10-01T13:25:11Z)
- Semantic Visual Simultaneous Localization and Mapping: A Survey [18.372996585079235]
This paper first reviews the development of semantic vSLAM, explicitly focusing on its strengths and differences.
Secondly, we explore three main issues of semantic vSLAM: the extraction and association of semantic information, the application of semantic information, and the advantages of semantic vSLAM.
Finally, we discuss future directions that will provide a blueprint for the future development of semantic vSLAM.
arXiv Detail & Related papers (2022-09-14T05:45:26Z)
- A Review on Visual-SLAM: Advancements from Geometric Modelling to Learning-based Semantic Scene Understanding [3.0839245814393728]
Simultaneous Localisation and Mapping (SLAM) is one of the fundamental problems in autonomous mobile robots.
Visual-SLAM uses the mobile robot's on-board sensors to collect observations and build a representation of the map.
Recent advancements in computer vision, such as deep learning techniques, have provided a data-driven approach to tackle the Visual-SLAM problem.
arXiv Detail & Related papers (2022-09-12T13:11:25Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z)
- LIFT-SLAM: a deep-learning feature-based monocular visual SLAM method [0.0]
We propose to combine the potential of deep learning-based feature descriptors with the traditional geometry-based VSLAM.
Experiments conducted on the KITTI and EuRoC datasets show that deep learning can be used to improve the performance of traditional VSLAM systems.
arXiv Detail & Related papers (2021-03-31T20:35:10Z) - AdaLAM: Revisiting Handcrafted Outlier Detection [106.38441616109716]
Local feature matching is a critical component of many computer vision pipelines.
We propose a hierarchical pipeline for effective outlier detection as well as integrate novel ideas which in sum lead to AdaLAM.
AdaLAM is designed to effectively exploit modern parallel hardware, resulting in a very fast, yet very accurate, outlier filter.
arXiv Detail & Related papers (2020-06-07T20:16:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.