QueensCAMP: an RGB-D dataset for robust Visual SLAM
- URL: http://arxiv.org/abs/2410.12520v1
- Date: Wed, 16 Oct 2024 12:58:08 GMT
- Title: QueensCAMP: an RGB-D dataset for robust Visual SLAM
- Authors: Hudson M. S. Bruno, Esther L. Colombini, Sidney N. Givigi Jr
- Abstract summary: We introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems.
The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination.
We offer open-source scripts for injecting camera failures into any image, enabling further customization.
- Abstract: Visual Simultaneous Localization and Mapping (VSLAM) is a fundamental technology for robotics applications. While VSLAM research has achieved significant advancements, its robustness under challenging situations, such as poor lighting, dynamic environments, motion blur, and sensor failures, remains a challenging issue. To address these challenges, we introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems. The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination, as well as emulated camera failures, including lens dirt, condensation, underexposure, and overexposure. Additionally, we offer open-source scripts for injecting camera failures into any images, enabling further customization by the research community. Our experiments demonstrate that ORB-SLAM2, a traditional VSLAM algorithm, and TartanVO, a Deep Learning-based VO algorithm, can experience performance degradation under these challenging conditions. Therefore, this dataset and the camera failure open-source tools provide a valuable resource for developing more robust VSLAM systems capable of handling real-world challenges.
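The paper's failure-injection scripts are open source, but they are not reproduced here. As a rough illustration of how such emulation can work in principle (this is a minimal sketch with hypothetical function names, not the authors' actual implementation), the underexposure, overexposure, and lens-dirt failures described in the abstract could be approximated with simple pixel-level transforms:

```python
import numpy as np

def underexpose(img: np.ndarray, gamma: float = 3.0) -> np.ndarray:
    """Darken an image by applying a gamma curve (gamma > 1 darkens)."""
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def overexpose(img: np.ndarray, gain: float = 2.5) -> np.ndarray:
    """Brighten and clip an image, blowing out highlights."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def lens_dirt(img: np.ndarray, mask: np.ndarray, strength: float = 0.7) -> np.ndarray:
    """Darken pixels under a dirt mask (values in [0, 1]) to emulate a dirty lens."""
    if mask.ndim == 2:  # broadcast an HxW mask across the color channels
        mask = mask[..., None]
    out = img.astype(np.float32) * (1.0 - strength * mask)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying such transforms to an otherwise clean sequence lets a benchmark isolate the effect of each failure mode on tracking accuracy, which is the evaluation strategy the dataset supports.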
Related papers
- HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision [16.432164340779266]
We introduce the HUE dataset, a collection of high-resolution event and frame sequences captured in low-light conditions.
Our dataset includes 106 sequences, encompassing indoor, cityscape, twilight, night, driving, and controlled scenarios.
We employ both qualitative and quantitative evaluations to assess state-of-the-art low-light enhancement and event-based image reconstruction methods.
arXiv Detail & Related papers (2024-10-24T21:15:15Z)
- Event-based Sensor Fusion and Application on Odometry: A Survey [2.3717744547851627]
Event cameras offer advantages in environments characterized by high-speed motion, low lighting, or wide dynamic range.
These properties render event cameras particularly effective for sensor fusion in robotics and computer vision.
arXiv Detail & Related papers (2024-10-20T19:32:25Z)
- A Comprehensive Study of Object Tracking in Low-Light Environments [3.508168174653255]
This paper examines the impact of noise, color imbalance, and low contrast on automatic object trackers.
We propose a solution to enhance tracking performance by integrating denoising and low-light enhancement methods.
Experimental results show that the proposed tracker, trained with low-light synthetic datasets, outperforms both the vanilla MixFormer and Siam R-CNN.
arXiv Detail & Related papers (2023-12-25T17:20:57Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light sources recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
This survey reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
It categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from lower visibility and heavier noise, due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed the Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Multi Visual Modality Fall Detection Dataset [4.00152916049695]
Falls are one of the leading causes of injury-related deaths among the elderly worldwide.
Effective detection of falls can reduce the risk of complications and injuries.
Video cameras provide a passive alternative; however, regular RGB cameras are impacted by changing lighting conditions and privacy concerns.
arXiv Detail & Related papers (2022-06-25T21:54:26Z)
- Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z)
- Does Thermal data make the detection systems more reliable? [1.2891210250935146]
We propose a comprehensive detection system based on a multimodal-collaborative framework.
This framework learns from both RGB (from visual cameras) and thermal (from Infrared cameras) data.
Our empirical results show that while the improvement in accuracy is nominal, the value lies in challenging and extremely difficult edge cases.
arXiv Detail & Related papers (2021-11-09T15:04:34Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.