MineInsight: A Multi-sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments
- URL: http://arxiv.org/abs/2506.04842v1
- Date: Thu, 05 Jun 2025 10:08:24 GMT
- Title: MineInsight: A Multi-sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments
- Authors: Mario Malizia, Charles Hamesse, Ken Hasselmann, Geert De Cubber, Nikolaos Tsiogkas, Eric Demeester, Rob Haelterman
- Abstract summary: We introduce MineInsight, a publicly available multi-sensor, multi-spectral dataset for landmine detection. The dataset features 35 different targets distributed along three distinct tracks, providing a diverse and realistic testing environment. MineInsight serves as a benchmark for developing and evaluating landmine detection algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of robotics in humanitarian demining increasingly involves computer vision techniques to improve landmine detection capabilities. However, in the absence of diverse and realistic datasets, the reliable validation of algorithms remains a challenge for the research community. In this paper, we introduce MineInsight, a publicly available multi-sensor, multi-spectral dataset designed for off-road landmine detection. The dataset features 35 different targets (15 landmines and 20 commonly found objects) distributed along three distinct tracks, providing a diverse and realistic testing environment. MineInsight is, to the best of our knowledge, the first dataset to integrate dual-view sensor scans from both an Unmanned Ground Vehicle and its robotic arm, offering multiple viewpoints to mitigate occlusions and improve spatial awareness. It features two LiDARs, as well as images captured at diverse spectral ranges, including visible (RGB, monochrome), visible short-wave infrared (VIS-SWIR), and long-wave infrared (LWIR). Additionally, the dataset comes with an estimation of the location of the targets, offering a benchmark for evaluating detection algorithms. We recorded approximately one hour of data in both daylight and nighttime conditions, resulting in around 38,000 RGB frames, 53,000 VIS-SWIR frames, and 108,000 LWIR frames. MineInsight serves as a benchmark for developing and evaluating landmine detection algorithms. Our dataset is available at https://github.com/mariomlz99/MineInsight.
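As a quick illustration of how a multi-spectral dataset like this might be iterated over, below is a minimal Python sketch. The folder names, file-naming scheme, and `load_frames` helper are assumptions made for illustration only; the actual structure and tooling are documented in the linked repository.

```python
from pathlib import Path

import cv2  # OpenCV, for image I/O

# Hypothetical layout: one folder per spectral modality, frames named by
# timestamp. The real layout is described in the MineInsight repository.
DATASET_ROOT = Path("MineInsight/track_1/daylight")
MODALITIES = ["rgb", "mono", "vis_swir", "lwir"]

def load_frames(modality: str):
    """Yield (timestamp, image) pairs for one spectral modality."""
    for frame_path in sorted((DATASET_ROOT / modality).glob("*.png")):
        timestamp = float(frame_path.stem)  # assumes epoch-second filenames
        yield timestamp, cv2.imread(str(frame_path), cv2.IMREAD_UNCHANGED)

for ts, img in load_frames("lwir"):
    print(f"{ts:.3f}: LWIR frame of shape {img.shape}")
```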
Related papers
- OmniUnet: A Multimodal Network for Unstructured Terrain Segmentation on Planetary Rovers Using RGB, Depth, and Thermal Imagery
This work presents OmniUnet, a transformer-based neural network architecture for semantic segmentation using RGB, depth, and thermal imagery. A custom multimodal sensor housing was developed using 3D printing and mounted on the Martian Rover Testbed for Autonomy. A subset of the collected dataset was manually labeled to support supervised training of the network. Inference tests yielded an average prediction time of 673 ms on a resource-constrained computer.
arXiv Detail & Related papers (2025-08-01T12:23:29Z)
- Multistream Network for LiDAR and Camera-based 3D Object Detection in Outdoor Scenes
Fusion of LiDAR and RGB data has the potential to enhance outdoor 3D object detection accuracy. We propose a MultiStream Detection (MuStD) network that meticulously extracts task-relevant information from both data modalities.
arXiv Detail & Related papers (2025-07-25T14:20:16Z)
- HoloMine: A Synthetic Dataset for Buried Landmines Recognition using Microwave Holographic Imaging
In this paper, we propose a novel synthetic dataset for buried landmine detection. The dataset consists of 41,800 microwave holographic images (2D) and their holographic inverted scans (3D) of different types of buried objects. We evaluate the performance of several state-of-the-art deep learning models trained on our synthetic dataset for various classification tasks.
arXiv Detail & Related papers (2025-02-28T13:53:35Z)
- RAD: A Dataset and Benchmark for Real-Life Anomaly Detection with Robotic Observations
The Realistic Anomaly Detection dataset (RAD) is the first multi-view RGB-based anomaly detection dataset specifically collected using a real robot arm.
RAD comprises 4765 images across 13 categories and 4 defect types, collected from more than 50 viewpoints.
We propose a data augmentation method to improve the accuracy of pose estimation and facilitate the reconstruction of 3D point clouds.
arXiv Detail & Related papers (2024-10-01T14:05:35Z)
- RoboSense: Large-scale Dataset and Benchmark for Egocentric Robot Perception and Navigation in Crowded and Unstructured Environments
We set up an egocentric multi-sensor data collection platform based on three main types of sensors (Camera, LiDAR, and Fisheye). A large-scale multimodal dataset, named RoboSense, is constructed to facilitate egocentric robot perception.
arXiv Detail & Related papers (2024-08-28T03:17:40Z)
- M3LEO: A Multi-Modal, Multi-Label Earth Observation Dataset Integrating Interferometric SAR and Multispectral Data
M3LEO is a multi-modal, multi-label Earth observation dataset.
It spans approximately 17M 4x4 km data chips from six diverse geographic regions.
arXiv Detail & Related papers (2024-06-06T16:30:41Z)
- DIDLM: A SLAM Dataset for Difficult Scenarios Featuring Infrared, Depth Cameras, LIDAR, 4D Radar, and Others under Adverse Weather, Low Light Conditions, and Rough Roads
We introduce a multi-sensor dataset covering challenging scenarios such as snowy weather, rainy weather, nighttime conditions, speed bumps, and rough terrains. The dataset includes rarely utilized sensors for extreme conditions, such as 4D millimeter-wave radar, infrared cameras, and depth cameras, alongside 3D LiDAR, RGB cameras, GPS, and IMU. It supports both autonomous driving and ground robot applications and provides reliable GPS/INS ground truth data, covering structured and semi-structured terrains.
arXiv Detail & Related papers (2024-04-15T09:49:33Z)
- OSDaR23: Open Sensor Data for Rail 2023
OSDaR23 is a multi-sensor dataset of 45 subsequences acquired in Hamburg, Germany, in September 2021.
The dataset contains 204,091 polyline, polygonal, rectangle, and cuboid annotations in total for 20 different object classes.
It is the first publicly available multi-sensor annotated dataset with a variety of object classes relevant for the railway context.
arXiv Detail & Related papers (2023-05-04T17:19:47Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
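Raw ROS-format recordings like these are typically consumed by replaying or iterating the bag file. A minimal sketch using the ROS1 `rosbag` Python API follows; the bag filename and topic names are hypothetical, and the dataset's own documentation lists the actual topics.

```python
import rosbag  # ROS1 Python API; ROS2 bags use rosbag2_py instead

# Hypothetical bag and topics, for illustration only.
with rosbag.Bag("subt_aerosol_run.bag") as bag:
    for topic, msg, stamp in bag.read_messages(
        topics=["/ouster/points", "/imu/data"]
    ):
        # Each iteration yields one synchronized, timestamped message.
        print(stamp.to_sec(), topic, type(msg).__name__)
```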
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization
The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, and the channel response between a 5G massive multiple-input multiple-output (MIMO) testbed and user equipment.
We synchronize these sensors to ensure that all data is recorded simultaneously.
The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks.
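Alignment of such heterogeneous streams is often verified or approximated offline by nearest-timestamp matching. Below is a small illustrative sketch; the 20 ms tolerance and the example rates are assumptions, not values from the paper.

```python
import bisect

def match_nearest(ref_stamps, query_stamps, tol=0.02):
    """Pair each reference timestamp with the nearest query timestamp
    within `tol` seconds. Both lists must be sorted, in seconds."""
    pairs = []
    for i, t in enumerate(ref_stamps):
        j = bisect.bisect_left(query_stamps, t)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(query_stamps)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(query_stamps[k] - t))
        if abs(query_stamps[k] - t) <= tol:
            pairs.append((i, k))
    return pairs

# e.g., ~30 Hz camera frames matched against a faster sensor stream
print(match_nearest([0.00, 0.033, 0.066], [0.001, 0.030, 0.050, 0.068]))
```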
arXiv Detail & Related papers (2023-02-10T15:12:40Z)
- Towards Multimodal Multitask Scene Understanding Models for Indoor Mobile Agents
In this paper, we discuss the main challenge: insufficient, or even no, labeled data for real-world indoor environments.
We describe MMISM (Multi-modality input Multi-task output Indoor Scene understanding Model) to tackle the above challenges.
MMISM considers RGB images as well as sparse LiDAR points as inputs and 3D object detection, depth completion, human pose estimation, and semantic segmentation as output tasks.
We show that MMISM performs on par or even better than single-task models.
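For intuition, a schematic PyTorch sketch of the shared-encoder, per-task-head pattern such a model follows is shown below. The module sizes, early-fusion scheme, and the two heads shown (the detection and pose heads are omitted) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskSceneModel(nn.Module):
    """Schematic shared-encoder / per-task-head layout (illustrative only)."""
    def __init__(self, feat_dim=256, num_classes=21):
        super().__init__()
        # Shared backbone over fused RGB + sparse-depth input (4 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # One lightweight head per output task.
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)  # segmentation
        self.depth_head = nn.Conv2d(feat_dim, 1, 1)          # depth completion

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)  # early fusion of modalities
        feats = self.encoder(x)
        return {"segmentation": self.seg_head(feats),
                "depth": self.depth_head(feats)}

model = MultiTaskSceneModel()
out = model(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print({k: v.shape for k, v in out.items()})
```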
arXiv Detail & Related papers (2022-09-27T04:49:19Z)
- Fully Sparse 3D Object Detection
We build a fully sparse 3D object detector (FSD) for long-range LiDAR-based object detection.
FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module.
SIR avoids the time-consuming neighbor queries in previous point-based methods by grouping points into instances.
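To sketch why grouping is cheaper: once every point carries an instance id, per-instance pooling is a single segmented reduction rather than a neighbor query per point. A toy numpy illustration follows; the random ids stand in for the clustered center votes a SIR-style pipeline would produce.

```python
import numpy as np

# Toy stand-in: N points with features and a predicted instance id per point.
rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 16)).astype(np.float32)
inst_ids = rng.integers(0, 20, size=1000)

# One sort plus a segmented reduction replaces per-point neighbor queries.
order = np.argsort(inst_ids)
sorted_ids = inst_ids[order]
unique_ids, starts = np.unique(sorted_ids, return_index=True)
inst_sums = np.add.reduceat(feats[order], starts, axis=0)
counts = np.diff(np.append(starts, len(sorted_ids)))[:, None]
inst_means = inst_sums / counts  # one pooled feature vector per instance
print(unique_ids.shape, inst_means.shape)  # (num_instances,) (num_instances, 16)
```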
arXiv Detail & Related papers (2022-07-20T17:01:33Z)
- M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots
We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite.
The dataset comprises 36 sequences captured in diverse scenarios including both indoor and outdoor environments.
For the benefit of the research community, we make the dataset and tools public.
arXiv Detail & Related papers (2021-12-19T12:37:09Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)