ROVER: A Multi-Season Dataset for Visual SLAM
- URL: http://arxiv.org/abs/2412.02506v1
- Date: Tue, 03 Dec 2024 15:34:00 GMT
- Title: ROVER: A Multi-Season Dataset for Visual SLAM
- Authors: Fabian Schmidt, Constantin Blessing, Markus Enzweiler, Abhinav Valada
- Abstract summary: ROVER is a benchmark dataset tailored for evaluating visual SLAM algorithms under diverse environmental conditions.
It covers 39 recordings across five outdoor locations, collected through all seasons and various lighting scenarios.
Results demonstrate that while stereo-inertial and RGB-D configurations generally perform better under favorable lighting and moderate vegetation, most SLAM systems perform poorly in low-light and high-vegetation scenarios.
- Score: 8.711135744156564
- Abstract: Robust Simultaneous Localization and Mapping (SLAM) is a crucial enabler for autonomous navigation in natural, unstructured environments such as parks and gardens. However, these environments present unique challenges for SLAM due to frequent seasonal changes, varying light conditions, and dense vegetation. These factors often degrade the performance of visual SLAM algorithms originally developed for structured urban environments. To address this gap, we present ROVER, a comprehensive benchmark dataset tailored for evaluating visual SLAM algorithms under diverse environmental conditions and spatial configurations. We captured the dataset with a robotic platform equipped with monocular, stereo, and RGB-D cameras, as well as inertial sensors. It covers 39 recordings across five outdoor locations, collected through all seasons and various lighting scenarios, i.e., day, dusk, and night with and without external lighting. With this novel dataset, we evaluate several traditional and deep learning-based SLAM methods and study their performance in diverse challenging conditions. The results demonstrate that while stereo-inertial and RGB-D configurations generally perform better under favorable lighting and moderate vegetation, most SLAM systems perform poorly in low-light and high-vegetation scenarios, particularly during summer and autumn. Our analysis highlights the need for improved adaptability in visual SLAM algorithms for outdoor applications, as current systems struggle with dynamic environmental factors affecting scale, feature extraction, and trajectory consistency. This dataset provides a solid foundation for advancing visual SLAM research in real-world, natural environments, fostering the development of more resilient SLAM systems for long-term outdoor localization and mapping. The dataset and the code of the benchmark are available under https://iis-esslingen.github.io/rover.
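For context on how such a benchmark typically scores trajectories: the standard metric is the absolute trajectory error (ATE), computed after rigidly aligning the estimate to ground truth. The sketch below is a minimal, hedged illustration of that metric (our own function names, not the ROVER benchmark code):

```python
import numpy as np

def align_rigid(est, gt):
    """Rigidly align estimated positions to ground truth (Kabsch/Umeyama, no scale).

    est, gt: (N, 3) arrays of time-synchronized camera positions.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return (R @ est.T).T + t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after rigid alignment."""
    aligned = align_rigid(est, gt)
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Toy usage: a noisy copy of a ground-truth trajectory.
gt = np.cumsum(np.random.randn(100, 3) * 0.1, axis=0)
est = gt + np.random.randn(100, 3) * 0.02
print(f"ATE RMSE: {ate_rmse(est, gt):.4f} m")
```

For monocular pipelines, a similarity (scale-included) alignment is used instead, since absolute scale is unobservable from a single camera; that is one reason the abstract singles out scale as a failure mode.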
Related papers
- NeRF and Gaussian Splatting SLAM in the Wild [9.516289996766059]
This study focuses on camera tracking accuracy, robustness to environmental factors, and computational efficiency, highlighting distinct trade-offs.
Neural SLAM methods achieve superior robustness, particularly under challenging conditions such as low light, but at a high computational cost.
Traditional methods perform the best across seasons but are highly sensitive to variations in lighting conditions.
arXiv Detail & Related papers (2024-12-04T12:11:19Z)
- QueensCAMP: an RGB-D dataset for robust Visual SLAM [0.0]
We introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems.
The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination.
We offer open-source scripts for injecting camera failures into any images, enabling further customization.
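As a hedged illustration of what such failure injection can look like (a generic sketch with our own function names, not the QueensCAMP scripts themselves), underexposure and motion blur can be simulated with standard OpenCV operations:

```python
import cv2
import numpy as np

def inject_low_light(img, gain=0.25, noise_sigma=8.0):
    """Simulate underexposure: darken the frame, then add sensor-like noise."""
    dark = img.astype(np.float32) * gain
    noisy = dark + np.random.randn(*img.shape) * noise_sigma
    return np.clip(noisy, 0, 255).astype(np.uint8)

def inject_motion_blur(img, kernel_size=15):
    """Simulate horizontal motion blur with a normalized line kernel."""
    kernel = np.zeros((kernel_size, kernel_size), np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size
    return cv2.filter2D(img, -1, kernel)

img = cv2.imread("frame.png")  # hypothetical input frame
cv2.imwrite("frame_dark.png", inject_low_light(img))
cv2.imwrite("frame_blur.png", inject_motion_blur(img))
```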
arXiv Detail & Related papers (2024-10-16T12:58:08Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
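One common image-based route to such pixel-wise registration is OpenCV's ECC alignment, whose correlation objective is largely invariant to exposure differences; the following is our own generic sketch of that idea, not the paper's actual pipeline:

```python
import cv2
import numpy as np

def align_frames(low_light, normal_light):
    """Warp the normal-light frame onto the low-light frame via ECC.

    ECC maximizes an illumination-invariant correlation coefficient,
    a reasonable choice when frames differ mainly in exposure.
    """
    ref = cv2.cvtColor(low_light, cv2.COLOR_BGR2GRAY)
    mov = cv2.cvtColor(normal_light, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)  # affine warp, identity init
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria)
    h, w = ref.shape
    return cv2.warpAffine(normal_light, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```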
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We also propose a large-scale human motion dataset, named FreeMotion, collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- 4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions [54.59279160621111]
We present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset.
The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions.
We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance.
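Joint evaluation of place recognition in such benchmarks commonly reduces to descriptor retrieval checked against ground-truth poses. The sketch below assumes precomputed global descriptors and positions (our simplification, not the 4Seasons protocol) and computes recall@1:

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=25.0):
    """Fraction of queries whose nearest database descriptor lies within
    dist_thresh meters of the query's ground-truth position.

    query_desc: (Q, D) descriptors; db_desc: (M, D);
    query_pos: (Q, 3) and db_pos: (M, 3) ground-truth positions.
    """
    hits = 0
    for q in range(len(query_desc)):
        nn = np.argmin(np.linalg.norm(db_desc - query_desc[q], axis=1))  # L2 NN
        if np.linalg.norm(db_pos[nn] - query_pos[q]) <= dist_thresh:
            hits += 1
    return hits / len(query_desc)
```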
arXiv Detail & Related papers (2022-12-31T13:52:36Z)
- Dense RGB-D-Inertial SLAM with Map Deformations [25.03159756734727]
We propose the first tightly-coupled dense RGB-D-inertial SLAM system.
We show that our system is more robust to fast motions and periods of low texture and low geometric variation than a related RGB-D-only SLAM system.
arXiv Detail & Related papers (2022-07-22T08:33:38Z)
- PLD-SLAM: A Real-Time Visual SLAM Using Points and Line Segments in Dynamic Scenes [0.0]
This paper proposes a real-time stereo indirect visual SLAM system, PLD-SLAM, which combines point and line features.
We also present a novel global gray similarity (GGS) algorithm for keyframe selection and efficient loop closure detection.
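The abstract does not define GGS precisely, so the following is only a generic illustration of using a global grayscale statistic (here, a histogram correlation) to pre-select loop-closure candidates; all names and thresholds are our own assumptions:

```python
import cv2
import numpy as np

def gray_similarity(img_a, img_b, bins=64):
    """Correlation of normalized global grayscale histograms, in [-1, 1]."""
    def hist(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([g], [0], None, [bins], [0, 256])
        return cv2.normalize(h, h).flatten()
    return cv2.compareHist(hist(img_a), hist(img_b), cv2.HISTCMP_CORREL)

def loop_candidates(keyframes, query, thresh=0.9, min_gap=30):
    """Indices of sufficiently old keyframes globally similar to the query.

    min_gap skips recent frames so that candidates are true revisits.
    """
    old = keyframes[:-min_gap] if min_gap > 0 else keyframes
    return [i for i, kf in enumerate(old) if gray_similarity(kf, query) > thresh]
```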
arXiv Detail & Related papers (2022-07-22T07:40:00Z)
- Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system in simulation only using simulated RGB data and optical flow.
The resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
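The core idea, pairing RGB with optical flow so the network sees a modality that looks alike in simulation and reality, can be sketched with dense Farnebäck flow; this is our generic example, not the paper's training code:

```python
import cv2
import numpy as np

def flow_input(prev_bgr, curr_bgr):
    """Stack normalized RGB with dense optical flow as a 5-channel input.

    Flow fields look statistically similar in simulation and reality,
    which is what lets a simulation-trained segmenter transfer.
    """
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    rgb = curr_bgr.astype(np.float32) / 255.0
    return np.concatenate([rgb, flow], axis=-1)  # (H, W, 5) network input
```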
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
- The Hilti SLAM Challenge Dataset [41.091844019181735]
Construction environments pose challenging problems for Simultaneous Localization and Mapping (SLAM) algorithms.
To support this research, we propose a new dataset, the Hilti SLAM Challenge dataset.
Each dataset includes accurate ground truth to allow direct testing of SLAM results.
arXiv Detail & Related papers (2021-09-23T12:02:40Z)
- 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving [48.588254700810474]
We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
Among others, it enables research on visual odometry, global place recognition, and map-based re-localization tracking.
arXiv Detail & Related papers (2020-09-14T12:31:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.