The Hilti SLAM Challenge Dataset
- URL: http://arxiv.org/abs/2109.11316v1
- Date: Thu, 23 Sep 2021 12:02:40 GMT
- Title: The Hilti SLAM Challenge Dataset
- Authors: Michael Helmberger, Kristian Morin, Nitish Kumar, Danwei Wang, Yufeng
Yue, Giovanni Cioffi, Davide Scaramuzza
- Abstract summary: Construction environments pose challenging problems for Simultaneous Localization and Mapping (SLAM) algorithms.
To support this research, we propose a new dataset, the Hilti SLAM Challenge Dataset.
Each dataset includes accurate ground truth to allow direct testing of SLAM results.
- Score: 41.091844019181735
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate and robust pose estimation is a fundamental capability for
autonomous systems to navigate, map and perform tasks. Particularly,
construction environments pose challenging problems for Simultaneous Localization
and Mapping (SLAM) algorithms due to sparsity, varying illumination conditions,
and dynamic objects. Current academic research in SLAM is focused on developing
more accurate and robust algorithms, for example, by fusing different sensor
modalities. To help this research, we propose a new dataset, the Hilti SLAM
Challenge Dataset. The sensor platform used to collect this dataset contains a
number of visual, lidar and inertial sensors which have all been rigorously
calibrated. All data is temporally aligned to support precise multi-sensor
fusion. Each dataset includes accurate ground truth to allow direct testing of
SLAM results. Raw data, as well as intrinsic and extrinsic sensor calibration
data, from twelve datasets in various environments is provided. Each environment
represents common scenarios found in building construction sites in various
stages of completion.
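As a concrete illustration of what the temporal alignment and extrinsic calibration described above enable, the following is a minimal sketch of two basic fusion steps: matching measurements across sensor streams by nearest timestamp, and mapping lidar points into a camera frame with an extrinsic transform. All rates, values, and names are illustrative placeholders, not the dataset's actual layout.

```python
# A minimal sketch of fusing temporally aligned, extrinsically calibrated
# sensor data. All rates and calibration values below are placeholders.
import numpy as np

def associate(cam_ts: np.ndarray, lidar_ts: np.ndarray, max_dt: float = 0.01):
    """Return (cam_idx, lidar_idx) pairs whose stamps differ by < max_dt s."""
    idx = np.searchsorted(lidar_ts, cam_ts)
    idx = np.clip(idx, 1, len(lidar_ts) - 1)
    left, right = lidar_ts[idx - 1], lidar_ts[idx]
    nearest = np.where(cam_ts - left < right - cam_ts, idx - 1, idx)
    mask = np.abs(lidar_ts[nearest] - cam_ts) < max_dt
    return np.nonzero(mask)[0], nearest[mask]

def lidar_to_camera(points: np.ndarray, R: np.ndarray, t: np.ndarray):
    """Apply a lidar-to-camera extrinsic to an (N, 3) point array."""
    return points @ R.T + t

# Synthetic stamps and an identity extrinsic, purely for illustration.
cam_ts = np.arange(0.0, 1.0, 1 / 30)           # 30 Hz camera
lidar_ts = np.arange(0.0, 1.0, 1 / 10) + 1e-3  # 10 Hz lidar, small offset
cam_idx, lidar_idx = associate(cam_ts, lidar_ts)
pts_cam = lidar_to_camera(np.random.rand(100, 3), np.eye(3), np.zeros(3))
```

In practice, the rotation and translation would come from the provided extrinsic calibration files rather than the identity placeholder used here.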
Related papers
- DIDLM: A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping [7.050468075029598]
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments.
The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar.
Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios.
arXiv Detail & Related papers (2024-04-15T09:49:33Z)
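Given ground truth like that provided in the Hilti dataset, performance differences such as those noted in the entry above are commonly quantified with absolute trajectory error (ATE). A minimal sketch with a translation-only alignment and synthetic trajectories; real evaluations typically use a full SE(3) or Sim(3) alignment:

```python
# Minimal absolute trajectory error (ATE) sketch: align an estimated
# trajectory to ground truth (translation-only, for brevity) and report RMSE.
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """gt, est: (N, 3) positions with matching timestamps."""
    # Align centroids so a constant offset between frames is removed.
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    errors = np.linalg.norm(gt - est_aligned, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

gt = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)  # synthetic path
est = gt + np.random.randn(500, 3) * 0.02 + 0.5         # noisy, offset
print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")
```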
- GDTM: An Indoor Geospatial Tracking Dataset with Distributed Multimodal Sensors [9.8714071146137]
GDTM is a nine-hour dataset for multimodal object tracking with distributed multimodal sensors and reconfigurable sensor node placements.
Our dataset enables the exploration of several research problems, such as optimizing architectures for processing multimodal data.
arXiv Detail & Related papers (2024-02-21T21:24:57Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The paper aims not only to capture temporal and spatial data diversity but also to show the impact of harsh conditions on the captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
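Since the entry above ships synchronized measurements in ROS bag format, a recording can be inspected with the ROS1 rosbag Python API. A minimal sketch; the bag filename and topic names are placeholders to be replaced with the dataset's actual ones:

```python
# Inspect a ROS1 bag: list per-topic message counts, then stream messages
# from selected topics in timestamp order. Names below are placeholders.
import rosbag

with rosbag.Bag("recording.bag") as bag:  # placeholder filename
    # Bag-level summary: message counts and types per topic.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(f"{topic}: {info.message_count} msgs of {info.msg_type}")

    # Iterate over messages from selected (placeholder) topics.
    for topic, msg, t in bag.read_messages(topics=["/imu", "/lidar/points"]):
        print(f"{t.to_sec():.6f}  {topic}")
```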
- On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z)
- IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparing and training deployable deep learning architectures requires models suited to different traffic scenarios.
The unstructured and complex driving layouts found in several developing countries such as India pose a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis [72.85526892440251]
We introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis.
The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum gripper.
We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 levels of difficulty and an unseen object set to evaluate different object and layout properties.
arXiv Detail & Related papers (2022-08-08T08:15:34Z)
- Learning to Detect Fortified Areas [0.0]
We consider the problem of classifying which areas of a given surface are fortified by, for instance, roads, sidewalks, parking spaces, paved driveways, and terraces.
We propose an algorithmic solution by designing a neural net embedding architecture that transforms data from all the different sensor systems into a new common representation.
arXiv Detail & Related papers (2021-05-26T08:03:42Z)
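The entry above describes an embedding architecture that maps data from different sensor systems into a common representation. The following is a minimal sketch of that general idea only, with per-modality encoders projected into a shared space; it is not the paper's architecture, and all dimensions are arbitrary placeholders:

```python
# Per-sensor encoders projecting heterogeneous inputs into one shared
# embedding space, fused by averaging. A concept sketch, not the paper's model.
import torch
import torch.nn as nn

class SharedEmbedding(nn.Module):
    def __init__(self, sensor_dims: dict, embed_dim: int = 64):
        super().__init__()
        # One small MLP encoder per sensor modality.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, 128), nn.ReLU(),
                                nn.Linear(128, embed_dim))
            for name, d in sensor_dims.items()
        })

    def forward(self, inputs: dict) -> torch.Tensor:
        # Encode each modality, then fuse by averaging in the shared space.
        embeddings = [self.encoders[name](x) for name, x in inputs.items()]
        return torch.stack(embeddings).mean(dim=0)

model = SharedEmbedding({"lidar": 32, "image": 512})  # placeholder dims
z = model({"lidar": torch.randn(8, 32), "image": torch.randn(8, 512)})
print(z.shape)  # torch.Size([8, 64])
```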
- SpaceNet 6: Multi-Sensor All Weather Mapping Dataset [13.715388432549373]
We present an open Multi-Sensor All Weather Mapping (MSAW) dataset and challenge.
MSAW covers 120 km2 over multiple overlapping collects and is annotated with over 48,000 unique building footprint labels.
We present a baseline and benchmark for building footprint extraction with SAR data and find that state-of-the-art segmentation models pre-trained on optical data, and then trained on SAR, outperform those trained on SAR data alone.
arXiv Detail & Related papers (2020-04-14T13:43:11Z)