Visual-Inertial SLAM for Unstructured Outdoor Environments: Benchmarking the Benefits and Computational Costs of Loop Closing
- URL: http://arxiv.org/abs/2408.01716v2
- Date: Fri, 07 Mar 2025 21:40:36 GMT
- Title: Visual-Inertial SLAM for Unstructured Outdoor Environments: Benchmarking the Benefits and Computational Costs of Loop Closing
- Authors: Fabian Schmidt, Constantin Blessing, Markus Enzweiler, Abhinav Valada
- Abstract summary: This paper benchmarks several open-source Visual-Inertial SLAM systems to evaluate their performance in unstructured natural outdoor settings. We focus on the impact of loop closing on localization accuracy and computational demands. The findings highlight the importance of loop closing in improving localization accuracy while managing computational resources efficiently.
- Score: 8.711135744156564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous Localization and Mapping (SLAM) is essential for mobile robotics, enabling autonomous navigation in dynamic, unstructured outdoor environments without relying on external positioning systems. These environments pose significant challenges due to variable lighting, weather conditions, and complex terrain. Visual-Inertial SLAM has emerged as a promising solution for robust localization under such conditions. This paper benchmarks several open-source Visual-Inertial SLAM systems, including traditional methods (ORB-SLAM3, VINS-Fusion, OpenVINS, Kimera, and SVO Pro) and learning-based approaches (HFNet-SLAM, AirSLAM), to evaluate their performance in unstructured natural outdoor settings. We focus on the impact of loop closing on localization accuracy and computational demands, providing a comprehensive analysis of these systems' effectiveness in real-world environments and especially their application to embedded systems in outdoor robotics. Our contributions further include an assessment of varying frame rates on localization accuracy and computational load. The findings highlight the importance of loop closing in improving localization accuracy while managing computational resources efficiently, offering valuable insights for optimizing Visual-Inertial SLAM systems for practical outdoor applications in mobile robotics. The dataset and the benchmark code are available under https://github.com/iis-esslingen/vi-slam_lc_benchmark.
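To make the evaluation described above concrete, the sketch below shows how absolute trajectory error (ATE) RMSE is commonly computed when benchmarking SLAM localization accuracy: SE(3)-align the estimated trajectory to ground truth (Kabsch/Umeyama, no scale) and take the RMSE of the residual positions. This is a minimal illustration assuming NumPy and time-synchronized (N, 3) position arrays; it is not the authors' benchmark code, which is available in the linked repository.

```python
import numpy as np

def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """ATE RMSE after a closed-form SE(3) alignment (Kabsch/Umeyama, no scale).

    est_xyz, gt_xyz: (N, 3) arrays of time-synchronized positions.
    """
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g

    # Rotation that best maps the estimated points onto ground truth.
    H = E.T @ G
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e

    aligned = est_xyz @ R.T + t
    residuals = np.linalg.norm(aligned - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals**2)))

# Toy usage: a noisy, rotated, shifted copy of a synthetic ground-truth path.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(scale=0.1, size=(500, 3)), axis=0)
yaw = 0.3
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
               [np.sin(yaw),  np.cos(yaw), 0.0],
               [0.0,          0.0,         1.0]])
est = gt @ Rz.T + rng.normal(scale=0.02, size=gt.shape) + np.array([1.0, -2.0, 0.5])
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```

The same evaluation loop can also log per-frame wall-clock time and process memory, which is the kind of accuracy-versus-compute trade-off the paper's frame-rate experiments examine.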
Related papers
- VSLAM-LAB: A Comprehensive Framework for Visual SLAM Methods and Datasets [64.57742015099531]
VSLAM-LAB is a unified framework designed to streamline the development, evaluation, and deployment of VSLAM systems.
It enables seamless compilation and configuration of VSLAM algorithms, automated dataset downloading and preprocessing, and standardized experiment design, execution, and evaluation.
arXiv Detail & Related papers (2025-04-06T12:02:19Z)
- NeRF and Gaussian Splatting SLAM in the Wild [9.516289996766059]
This study focuses on camera tracking accuracy, robustness to environmental factors, and computational efficiency, highlighting distinct trade-offs.
Neural SLAM methods achieve superior robustness, particularly under challenging conditions such as low light, but at a high computational cost.
Traditional methods perform the best across seasons but are highly sensitive to variations in lighting conditions.
arXiv Detail & Related papers (2024-12-04T12:11:19Z)
- ROVER: A Multi-Season Dataset for Visual SLAM [7.296917102476635]
ROVER is a benchmark dataset for evaluating visual SLAM algorithms in diverse environmental conditions.
It covers 39 recordings across five outdoor locations, collected through all seasons and various lighting scenarios.
Results show that while stereo-inertial and RGBD configurations perform better under favorable lighting, most SLAM systems perform poorly in low-light and high-vegetation scenarios.
arXiv Detail & Related papers (2024-12-03T15:34:00Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions include a data-driven approach with a simple architecture designed for real-time operation, a self-supervised training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- NGD-SLAM: Towards Real-Time Dynamic SLAM without GPU [4.959552873584984]
This paper proposes an open-source real-time dynamic SLAM system that runs solely on CPU by incorporating a mask prediction mechanism.
Our system maintains high localization accuracy in dynamic environments while achieving a tracking frame rate of 56 FPS on a laptop CPU.
arXiv Detail & Related papers (2024-05-12T23:00:53Z)
- Particle Filter SLAM for Vehicle Localization [2.45723043286596]
We address the challenges of SLAM by adopting the Particle Filter SLAM method.
Our approach leverages encoder data and fiber optic gyro (FOG) information to enable precise estimation of vehicle motion.
The integration of these data streams culminates in the establishment of a Particle Filter SLAM framework.
arXiv Detail & Related papers (2024-02-12T06:06:09Z)
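For the particle-filter entry above: its exact pipeline is not spelled out in the summary, so the following is only a generic sketch of the predict/reweight/resample loop for a planar vehicle pose (x, y, yaw), assuming NumPy, an encoder-derived speed, a gyro yaw rate, and a caller-supplied measurement log-likelihood; the function names and noise parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict(particles, v, omega, dt, sigma_v=0.05, sigma_w=0.01):
    """Propagate (N, 3) particles [x, y, yaw] with encoder speed v and gyro rate omega,
    injecting Gaussian noise so the filter keeps exploring plausible motions."""
    n = len(particles)
    v_n = v + rng.normal(0.0, sigma_v, n)
    w_n = omega + rng.normal(0.0, sigma_w, n)
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    particles[:, 2] += w_n * dt
    return particles

def reweight_and_resample(particles, weights, log_lik):
    """Fold a caller-supplied measurement log-likelihood into the weights and
    resample when the effective sample size collapses."""
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    n_eff = 1.0 / np.sum(weights**2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# One filter step with dummy data.
P = np.zeros((500, 3))
W = np.full(500, 1.0 / 500)
P = predict(P, v=1.2, omega=0.05, dt=0.1)
P, W = reweight_and_resample(P, W, log_lik=rng.normal(size=500))
```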
- DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking and Loop-Closing [13.50980509878613]
Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning-based SLAM systems.
Our system employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of keypoint extraction networks.
To mitigate cumulative positioning errors, DK-SLAM incorporates a novel online learning module that utilizes binary features for loop closure detection.
arXiv Detail & Related papers (2024-01-17T12:08:30Z)
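For the DK-SLAM entry above: its online loop-closure module is not described in detail here, so the snippet below only illustrates the general idea of scoring loop-closure candidates with binary descriptors via Hamming distance (ORB-style 256-bit descriptors, NumPy assumed; function names and thresholds are hypothetical).

```python
import numpy as np

def hamming_matrix(desc_q, desc_db):
    """Pairwise Hamming distances between two sets of binary descriptors,
    stored as (N, 32) uint8 arrays (i.e. 256-bit, ORB-style)."""
    xor = np.bitwise_xor(desc_q[:, None, :], desc_db[None, :, :])
    return np.unpackbits(xor, axis=-1).sum(axis=-1)

def loop_candidate_score(desc_query, desc_keyframe, max_dist=64):
    """Fraction of query descriptors with a match below max_dist in the keyframe;
    a crude stand-in for a bag-of-binary-words similarity score."""
    d = hamming_matrix(desc_query, desc_keyframe)
    return float(np.mean(d.min(axis=1) < max_dist))

# Toy check: a frame scores higher against a lightly corrupted copy of itself
# than against an unrelated frame.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(300, 32), dtype=np.uint8)
noisy = frame ^ (rng.random(frame.shape) < 0.02).astype(np.uint8)
other = rng.integers(0, 256, size=(300, 32), dtype=np.uint8)
print(loop_candidate_score(frame, noisy), ">", loop_candidate_score(frame, other))
```

In a full system, a high-scoring candidate would still be verified geometrically (e.g. with a RANSAC pose estimate) before a loop constraint is added to the pose graph.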
- Machine Learning-based Positioning using Multivariate Time Series Classification for Factory Environments [0.0]
State-of-the-art solutions heavily rely on external infrastructures and are subject to potential privacy compromises.
Recent developments in machine learning (ML) offer solutions to address these limitations relying only on the data from onboard sensors of IoT devices.
This paper presents a machine learning-based indoor positioning system, using motion and ambient sensors, to localize a moving entity in privacy-concerned factory environments.
arXiv Detail & Related papers (2023-08-22T10:07:19Z)
- Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation [82.85015548989223]
Pentathlon is a benchmark for holistic and realistic evaluation of model efficiency.
Pentathlon focuses on inference, which accounts for a majority of the compute in a model's lifecycle.
It incorporates a suite of metrics that target different aspects of efficiency, including latency, throughput, memory overhead, and energy consumption.
arXiv Detail & Related papers (2023-07-19T01:05:33Z)
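For the Efficiency Pentathlon entry above (and for the compute analysis in the main paper): the snippet below is a bare-bones illustration, using only the Python standard library, of how per-call latency, throughput, and peak traced memory can be measured. It is not Pentathlon's harness, and energy measurement is omitted because it requires hardware counters.

```python
import statistics
import time
import tracemalloc

def benchmark(fn, inputs, warmup=3):
    """Return median latency, throughput, and peak tracemalloc memory for fn.
    Only covers Python-level allocations; energy/GPU metrics need external tooling."""
    for x in inputs[:warmup]:        # warm up caches, lazy imports, JITs
        fn(x)
    latencies = []
    tracemalloc.start()
    t_start = time.perf_counter()
    for x in inputs:
        t0 = time.perf_counter()
        fn(x)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - t_start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "median_latency_s": statistics.median(latencies),
        "throughput_items_per_s": len(inputs) / total,
        "peak_traced_bytes": peak,
    }

# Toy usage with a stand-in workload.
print(benchmark(lambda n: sorted(range(n)), inputs=[10_000] * 50))
```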
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- Closing the loop: Autonomous experiments enabled by machine-learning-based online data analysis in synchrotron beamline environments [80.49514665620008]
Machine learning can be used to enhance research involving large or rapidly generated datasets.
In this study, we describe the incorporation of ML into a closed-loop workflow for X-ray reflectometry (XRR).
We present solutions that provide an elementary data analysis in real time during the experiment without introducing additional software dependencies into the beamline control software environment.
arXiv Detail & Related papers (2023-06-20T21:21:19Z)
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build semantic maps at object level for dynamic scenarios in real time using only one commonly used Intel Core i7 CPU.
We evaluate RDS-SLAM on the TUM RGB-D dataset, and experimental results show that RDS-SLAM runs at 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
- PLD-SLAM: A Real-Time Visual SLAM Using Points and Line Segments in Dynamic Scenes [0.0]
This paper proposes a real-time stereo indirect visual SLAM system, PLD-SLAM, which combines point and line features.
We also present a novel global gray similarity (GGS) algorithm to achieve reasonable keyframe selection and efficient loop closure detection.
arXiv Detail & Related papers (2022-07-22T07:40:00Z)
- Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system in simulation only using simulated RGB data and optical flow.
The resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
- Real-time Outdoor Localization Using Radio Maps: A Deep Learning Approach [59.17191114000146]
LocUNet is a convolutional, end-to-end trained neural network (NN) for the localization task.
We show that LocUNet can localize users with state-of-the-art accuracy and enjoys high robustness to inaccuracies in the estimations of radio maps.
arXiv Detail & Related papers (2021-06-23T17:27:04Z)
- Indoor Point-to-Point Navigation with Deep Reinforcement Learning and Ultra-wideband [1.6799377888527687]
Moving obstacles and non-line-of-sight occurrences can generate noisy and unreliable signals.
We show how a power-efficient point-to-point local planner, learned with deep reinforcement learning (RL), can constitute a complete short-range guidance solution that is robust and resilient to noise.
Our results show that the computationally efficient end-to-end policy, learned purely in simulation, can provide a robust, scalable, and low-cost at-the-edge navigation solution.
arXiv Detail & Related papers (2020-11-18T12:30:36Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach can be up to 10x faster than the state of the art on public benchmarks, with comparable accuracy.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
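As general background for the rotation-averaging entry above (its actual optimization backbone is not described in this summary): single rotation averaging under the chordal L2 metric reduces to projecting the element-wise mean of the input rotation matrices back onto SO(3) with an SVD, as in the NumPy sketch below.

```python
import numpy as np

def chordal_mean(rotations):
    """Chordal L2 mean of a stack of 3x3 rotation matrices: average them
    element-wise, then project the result back onto SO(3) via SVD."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def rot_z(angle):
    """Rotation about the z-axis by the given angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Averaging noisy yaw rotations around 0.5 rad recovers roughly 0.5 rad.
rng = np.random.default_rng(3)
Rs = np.stack([rot_z(0.5 + rng.normal(scale=0.05)) for _ in range(100)])
R_mean = chordal_mean(Rs)
print(np.arctan2(R_mean[1, 0], R_mean[0, 0]))   # ~0.5
```

Multiple rotation averaging over a pose graph, the setting relevant to SLAM, builds on the same projection but solves for many absolute rotations from relative measurements.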
- SLAM in the Field: An Evaluation of Monocular Mapping and Localization on Challenging Dynamic Agricultural Environment [12.666030953871186]
This paper demonstrates a system capable of combining a sparse, indirect, monocular visual SLAM, with both offline and real-time Multi-View Stereo (MVS) reconstruction algorithms.
The use of a monocular SLAM makes our system much easier to integrate with an existing device, as we do not rely on a LiDAR.
arXiv Detail & Related papers (2020-11-02T16:53:35Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.