QueensCAMP: an RGB-D dataset for robust Visual SLAM
- URL: http://arxiv.org/abs/2410.12520v1
- Date: Wed, 16 Oct 2024 12:58:08 GMT
- Title: QueensCAMP: an RGB-D dataset for robust Visual SLAM
- Authors: Hudson M. S. Bruno, Esther L. Colombini, Sidney N. Givigi Jr
- Abstract summary: We introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems.
The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination.
We offer open-source scripts for injecting camera failures into arbitrary images, enabling further customization.
- Abstract: Visual Simultaneous Localization and Mapping (VSLAM) is a fundamental technology for robotics applications. While VSLAM research has achieved significant advancements, its robustness under adverse conditions, such as poor lighting, dynamic environments, motion blur, and sensor failures, remains an open problem. To address these challenges, we introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems. The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination, as well as emulated camera failures, including lens dirt, condensation, underexposure, and overexposure. Additionally, we offer open-source scripts for injecting camera failures into arbitrary images, enabling further customization by the research community. Our experiments demonstrate that both ORB-SLAM2, a traditional VSLAM algorithm, and TartanVO, a deep-learning-based VO algorithm, experience performance degradation under these challenging conditions. This dataset and the open-source camera-failure tools therefore provide a valuable resource for developing more robust VSLAM systems capable of handling real-world challenges.
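The paper's released scripts are not reproduced here, but the under- and overexposure failures they emulate can be sketched with a simple gamma curve on an 8-bit image. The function name below is illustrative, not the dataset's actual API:

```python
import numpy as np

def adjust_exposure(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a gamma curve to an 8-bit image.

    gamma > 1 darkens the image (emulating underexposure);
    gamma < 1 brightens it (emulating overexposure).
    """
    normalized = image.astype(np.float32) / 255.0
    curved = np.power(normalized, gamma)
    return (curved * 255.0).clip(0, 255).astype(np.uint8)

# Example: a mid-gray RGB frame pushed toward under- and overexposure.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
dark = adjust_exposure(frame, gamma=2.5)    # emulated underexposure
bright = adjust_exposure(frame, gamma=0.4)  # emulated overexposure
```

A benchmark harness would apply such a transform to every frame of a sequence before feeding it to the VSLAM system, then compare trajectory error against the unperturbed run.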
Related papers
- Rethinking High-speed Image Reconstruction Framework with Spike Camera [48.627095354244204]
Spike cameras generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras.
We introduce a novel spike-to-image reconstruction framework SpikeCLIP that goes beyond traditional training paradigms.
Our experiments on real-world low-light datasets demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images.
arXiv Detail & Related papers (2025-01-08T13:00:17Z)
- ROVER: A Multi-Season Dataset for Visual SLAM [8.711135744156564]
ROVER is a benchmark dataset tailored for evaluating visual SLAM algorithms under diverse environmental conditions.
It covers 39 recordings across five outdoor locations, collected through all seasons and various lighting scenarios.
Results demonstrate that while stereo-inertial and RGB-D configurations generally perform better under favorable lighting and moderate vegetation, most SLAM systems perform poorly in low-light and high-vegetation scenarios.
arXiv Detail & Related papers (2024-12-03T15:34:00Z)
- HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision [16.432164340779266]
We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
arXiv Detail & Related papers (2024-10-24T21:15:15Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit asynchronous, irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Multi Visual Modality Fall Detection Dataset [4.00152916049695]
Falls are one of the leading causes of injury-related deaths among the elderly worldwide.
Effective detection of falls can reduce the risk of complications and injuries.
Video cameras provide a passive alternative; however, regular RGB cameras are impacted by changing lighting conditions and privacy concerns.
arXiv Detail & Related papers (2022-06-25T21:54:26Z)
- Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z)
- Does Thermal data make the detection systems more reliable? [1.2891210250935146]
We propose a comprehensive detection system based on a multimodal-collaborative framework.
This framework learns from both RGB (from visual cameras) and thermal (from infrared cameras) data.
Our empirical results show that while the improvement in accuracy is nominal, the value lies in challenging and extremely difficult edge cases.
arXiv Detail & Related papers (2021-11-09T15:04:34Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.