M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots
- URL: http://arxiv.org/abs/2112.13659v1
- Date: Sun, 19 Dec 2021 12:37:09 GMT
- Title: M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots
- Authors: Jie Yin, Ang Li, Tao Li, Wenxian Yu, and Danping Zou
- Abstract summary: We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite.
The dataset comprises 36 sequences captured in diverse scenarios including both indoor and outdoor environments.
For the benefit of the research community, we make the dataset and tools public.
- Score: 22.767094281397746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce M2DGR: a novel large-scale dataset collected by a ground robot
with a full sensor-suite including six fish-eye and one sky-pointing RGB
cameras, an infrared camera, an event camera, a Visual-Inertial Sensor
(VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade
Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation
system with real-time kinematic (RTK) signals. All those sensors were
well-calibrated and synchronized, and their data were recorded simultaneously.
The ground truth trajectories were obtained by a motion capture device, a
laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences
(about 1TB) captured in diverse scenarios including both indoor and outdoor
environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results
show that existing solutions perform poorly in some scenarios. For the benefit
of the research community, we make the dataset and tools public. The webpage of
our project is https://github.com/SJTU-ViSYS/M2DGR.
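Since the abstract reports both recorded ground-truth trajectories and an evaluation of state-of-the-art SLAM systems on the sequences, a minimal sketch of such an evaluation is shown below. It assumes the estimated and ground-truth trajectories are available as TUM-style text files ("timestamp tx ty tz qx qy qz qw" per line); the file names are hypothetical, and the actual formats and official evaluation tools should be taken from the project page.

```python
# Minimal sketch: absolute trajectory error (ATE RMSE) between an estimated
# trajectory and a ground-truth trajectory, assuming TUM-format text files.
import numpy as np

def load_tum(path):
    """Load a TUM-format trajectory as (timestamps, Nx3 positions)."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1:4]

def associate(t_gt, t_est, max_dt=0.02):
    """Match each estimated timestamp to the nearest ground-truth timestamp."""
    pairs = []
    for j, t in enumerate(t_est):
        i = int(np.argmin(np.abs(t_gt - t)))
        if abs(t_gt[i] - t) <= max_dt:
            pairs.append((i, j))
    return pairs

def ate_rmse(p_gt, p_est, pairs):
    """RMSE of position errors after removing the mean offset of each trajectory."""
    g = np.array([p_gt[i] for i, _ in pairs])
    e = np.array([p_est[j] for _, j in pairs])
    g = g - g.mean(axis=0)  # crude alignment by centroid subtraction;
    e = e - e.mean(axis=0)  # a full evaluation would use Umeyama/SE(3) alignment
    return float(np.sqrt(np.mean(np.sum((g - e) ** 2, axis=1))))

# Hypothetical file names for illustration only.
t_gt, p_gt = load_tum("ground_truth.txt")
t_est, p_est = load_tum("slam_estimate.txt")
print("ATE RMSE [m]:", ate_rmse(p_gt, p_est, associate(t_gt, t_est)))
```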
Related papers
- ES-PTAM: Event-based Stereo Parallel Tracking and Mapping [11.801511288805225]
Event cameras offer advantages to overcome the limitations of standard cameras.
We propose a novel event-based stereo VO system by combining two ideas.
We evaluate the system on five real-world datasets.
arXiv Detail & Related papers (2024-08-28T07:56:28Z)
- M3LEO: A Multi-Modal, Multi-Label Earth Observation Dataset Integrating Interferometric SAR and Multispectral Data [1.4053129774629076]
M3LEO is a multi-modal, multi-label Earth observation dataset.
It spans approximately 17M 4x4 km data chips from six diverse geographic regions.
arXiv Detail & Related papers (2024-06-06T16:30:41Z)
- VBR: A Vision Benchmark in Rome [1.71787484850503]
This paper presents a vision and perception research dataset collected in Rome, featuring RGB data, 3D point clouds, IMU, and GPS data.
We introduce a new benchmark targeting visual odometry and SLAM, to advance the research in autonomous robotics and computer vision.
arXiv Detail & Related papers (2024-04-17T12:34:49Z)
- USTC FLICAR: A Sensors Fusion Dataset of LiDAR-Inertial-Camera for Heavy-duty Autonomous Aerial Work Robots [13.089952067224138]
We present the USTC FLICAR dataset, which is dedicated to the development of simultaneous localization and mapping.
The proposed dataset extends the typical autonomous driving sensing suite to aerial scenes.
Based on the Segment Anything Model (SAM), we produce the Semantic FLICAR dataset, which provides fine-grained semantic segmentation annotations.
arXiv Detail & Related papers (2023-04-04T17:45:06Z)
- MIPI 2022 Challenge on RGB+ToF Depth Completion: Dataset and Report [92.61915017739895]
This paper introduces the first MIPI challenge including five tracks focusing on novel image sensors and imaging algorithms.
The participants were provided with a new dataset called TetrasRGBD, which contains 18k pairs of high-quality synthetic RGB+Depth training data and 2.3k pairs of testing data from mixed sources.
The final results are evaluated using objective metrics and Mean Opinion Score (MOS) subjectively.
arXiv Detail & Related papers (2022-09-15T05:31:53Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- EagerMOT: 3D Multi-Object Tracking via Sensor Fusion [68.8204255655161]
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
arXiv Detail & Related papers (2021-04-29T22:30:29Z)
- DMD: A Large-Scale Multi-Modal Driver Monitoring Dataset for Attention and Alertness Analysis [54.198237164152786]
Vision is the richest and most cost-effective technology for Driver Monitoring Systems (DMS).
The lack of sufficiently large and comprehensive datasets is currently a bottleneck for the progress of DMS development.
In this paper, we introduce the Driver Monitoring dataset (DMD), an extensive dataset which includes real and simulated driving scenarios.
arXiv Detail & Related papers (2020-08-27T12:33:54Z)
- JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset [34.609125601292]
We present JRMOT, a novel 3D MOT system that integrates information from RGB images and 3D point clouds to achieve real-time tracking performance.
As part of our work, we release the JRDB dataset, a novel large-scale 2D+3D dataset and benchmark.
The presented 3D MOT system demonstrates state-of-the-art performance against competing methods on the popular 2D tracking KITTI benchmark.
arXiv Detail & Related papers (2020-02-19T19:21:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.