The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization
- URL: http://arxiv.org/abs/2302.05309v3
- Date: Fri, 26 Apr 2024 11:04:02 GMT
- Title: The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization
- Authors: Ilayda Yaman, Guoda Tian, Martin Larsson, Patrik Persson, Michiel Sandra, Alexander Dürr, Erik Tegler, Nikhil Challa, Henrik Garde, Fredrik Tufvesson, Kalle Åström, Ove Edfors, Steffen Malkowsky, Liang Liu
- Abstract summary: The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, and the channel response between a 5G massive multiple-input and multiple-output (MIMO) testbed and user equipment.
We synchronize these sensors to ensure that all data is recorded simultaneously.
The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks.
- Score: 41.58739817444644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a synchronized multisensory dataset for accurate and robust indoor localization: the Lund University Vision, Radio, and Audio (LuViRA) Dataset. The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, channel response between a 5G massive multiple-input and multiple-output (MIMO) testbed and user equipment, audio recorded by 12 microphones, and six degrees of freedom (6DOF) pose ground truth accurate to 0.5 mm. We synchronize these sensors to ensure that all data is recorded simultaneously. A camera, speaker, and transmit antenna are placed on top of a slowly moving service robot, and 89 trajectories are recorded. Each trajectory includes 20 to 50 seconds of recorded sensor data and ground truth labels. Data from different sensors can be used separately or jointly to perform localization tasks, and data from the motion capture (mocap) system is used to verify the results obtained by the localization algorithms. The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks. Moreover, the full dataset or parts of it can also be used for other research areas such as channel estimation and image classification. Our dataset is available at: https://github.com/ilaydayaman/LuViRA_Dataset
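To make concrete how the mocap ground truth could be used to verify a localization result, below is a minimal Python sketch that interpolates an estimated trajectory onto the ground-truth timestamps and computes a position RMSE. The file name `ground_truth.csv`, the directory layout, and the column order are assumptions made for illustration only; the actual data format is documented in the GitHub repository linked above.

```python
import numpy as np

def load_ground_truth(traj_dir):
    """Load the mocap ground truth for one LuViRA trajectory.

    NOTE: the file name and column layout are hypothetical; consult the
    dataset repository for the real format.
    Assumed columns: timestamp, x, y, z, qx, qy, qz, qw.
    """
    return np.loadtxt(f"{traj_dir}/ground_truth.csv", delimiter=",", skiprows=1)

def position_rmse(est, gt):
    """Root-mean-square position error of an estimated trajectory.

    est, gt: arrays with columns [timestamp, x, y, z]. The estimate is
    linearly interpolated onto the ground-truth timestamps so the two
    (differently sampled) streams can be compared point by point.
    """
    est_on_gt = np.column_stack(
        [np.interp(gt[:, 0], est[:, 0], est[:, i]) for i in range(1, 4)]
    )
    err = est_on_gt - gt[:, 1:4]
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

# Example usage (est would come from a vision, radio, or audio localizer):
# gt = load_ground_truth("trajectories/traj_01")
# print(f"position RMSE: {position_rmse(est, gt):.3f} m")
```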
Related papers
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- DIDLM: A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping [7.050468075029598]
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments.
The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar.
Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios.
arXiv Detail & Related papers (2024-04-15T09:49:33Z)
- MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark [63.878793340338035]
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
Existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting.
We present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two different environments.
arXiv Detail & Related papers (2024-03-29T15:08:37Z)
- GDTM: An Indoor Geospatial Tracking Dataset with Distributed Multimodal Sensors [9.8714071146137]
GDTM is a nine-hour dataset for multimodal object tracking with distributed multimodal sensors and reconfigurable sensor node placements.
Our dataset enables the exploration of several research problems, such as optimizing architectures for processing multimodal data.
arXiv Detail & Related papers (2024-02-21T21:24:57Z)
- LuViRA Dataset Validation and Discussion: Comparing Vision, Radio, and Audio Sensors for Indoor Localization [8.296768815428441]
We present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms.
We create the first baseline for the aforementioned sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset.
Some of the challenges of using each specific sensor for indoor localization tasks are highlighted.
arXiv Detail & Related papers (2023-09-06T12:57:00Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from a harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The paper focuses not only on capturing temporal and spatial data diversity but also on presenting the impact of harsh conditions on the captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors that measure per-pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- A Multi-spectral Dataset for Evaluating Motion Estimation Systems [7.953825491774407]
This paper presents a novel dataset for evaluating the performance of multi-spectral motion estimation systems.
All the sequences are recorded with a handheld multi-spectral device.
The depth images are captured by a Microsoft Kinect2 and can be useful for learning cross-modality stereo matching.
arXiv Detail & Related papers (2020-07-01T17:11:02Z)