DeepSense-V2V: A Vehicle-to-Vehicle Multi-Modal Sensing, Localization, and Communications Dataset
- URL: http://arxiv.org/abs/2406.17908v1
- Date: Tue, 25 Jun 2024 19:43:49 GMT
- Title: DeepSense-V2V: A Vehicle-to-Vehicle Multi-Modal Sensing, Localization, and Communications Dataset
- Authors: Joao Morais, Gouranga Charan, Nikhil Srinivas, Ahmed Alkhateeb
- Abstract summary: This work presents the first large-scale multi-modal dataset for studying mmWave vehicle-to-vehicle communications.
The dataset covers 120 km of day and night driving in intercity and rural settings, at speeds of up to 100 km/h.
More than one million objects were detected across all images, from trucks to bicycles.
- Score: 12.007501768974281
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: High data rate and low-latency vehicle-to-vehicle (V2V) communication are essential for future intelligent transport systems to enable coordination, enhance safety, and support distributed computing and intelligence requirements. Developing effective communication strategies, however, demands realistic test scenarios and datasets. This is especially important at high-frequency bands, where more spectrum is available, yet harvesting this bandwidth is challenged by the need for directional transmission and the sensitivity of signal propagation to blockages. This work presents the first large-scale multi-modal dataset for studying mmWave vehicle-to-vehicle communications. It presents a two-vehicle testbed that comprises data from a 360-degree camera, four radars, four 60 GHz phased arrays, a 3D lidar, and two precision GPS receivers. The dataset covers 120 km of day and night driving in intercity and rural settings, at speeds of up to 100 km/h. More than one million objects were detected across all images, from trucks to bicycles. This work further includes detailed dataset statistics that demonstrate the coverage of diverse situations and highlights how this dataset can enable novel machine-learning applications.
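To illustrate how such synchronized multi-modal captures could feed a machine-learning pipeline, the minimal Python sketch below defines a sample container and a toy beam-selection helper. All field names, array shapes, and the dummy data are assumptions made for illustration only; they do not reflect the released dataset's actual schema or tooling.

```python
# Hypothetical sketch of how one synchronized DeepSense-V2V sample might be
# organized for an ML pipeline. Field names and shapes are illustrative
# assumptions, not the released dataset's actual format.
from dataclasses import dataclass

import numpy as np


@dataclass
class V2VSample:
    timestamp: float          # UNIX time of the synchronized capture
    image_360: np.ndarray     # (H, W, 3) panoramic RGB frame
    radar: np.ndarray         # (4, n_bins) range profiles, one per radar
    lidar_points: np.ndarray  # (N, 3) xyz point cloud
    rx_power: np.ndarray      # (4, n_beams) received power per phased-array beam
    gps_tx: np.ndarray        # (lat, lon) of the transmitter vehicle
    gps_rx: np.ndarray        # (lat, lon) of the receiver vehicle


def best_beam(sample: V2VSample) -> tuple[int, int]:
    """Return (array_index, beam_index) of the strongest received power."""
    idx = np.unravel_index(np.argmax(sample.rx_power), sample.rx_power.shape)
    return int(idx[0]), int(idx[1])


if __name__ == "__main__":
    # Build a dummy sample with random data to show the intended usage.
    rng = np.random.default_rng(0)
    sample = V2VSample(
        timestamp=1_700_000_000.0,
        image_360=rng.integers(0, 255, (512, 2048, 3), dtype=np.uint8),
        radar=rng.random((4, 256)),
        lidar_points=rng.random((30_000, 3)),
        rx_power=rng.random((4, 64)),
        gps_tx=np.array([33.42, -111.93]),
        gps_rx=np.array([33.43, -111.94]),
    )
    print("Strongest beam (array, beam):", best_beam(sample))
```

A container like this keeps each modality as a plain array, so the same sample can drive, for example, a vision-aided beam-prediction model or a lidar-based blockage classifier without reshaping the data.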
Related papers
- Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery [4.387337528923525]
This paper presents a framework for extracting georeferenced vehicle trajectories from high-altitude drone footage.
We employ state-of-the-art computer vision and deep learning to create an end-to-end pipeline.
Results demonstrate the potential of integrating drone technology with advanced computer vision for precise, cost-effective urban traffic monitoring.
arXiv Detail & Related papers (2024-11-04T14:49:01Z)
- RoboSense: Large-scale Dataset and Benchmark for Multi-sensor Low-speed Autonomous Driving [62.5830455357187]
In this paper, we construct a multimodal data collection platform based on three main types of sensors (Camera, LiDAR, and Fisheye).
A large-scale multi-sensor dataset is built, named RoboSense, to facilitate near-field scene understanding.
RoboSense contains more than 133K synchronized data frames with 1.4M 3D bounding boxes and IDs in the full $360^\circ$ view, forming 216K trajectories across 7.6K temporal sequences.
arXiv Detail & Related papers (2024-08-28T03:17:40Z)
- HawkRover: An Autonomous mmWave Vehicular Communication Testbed with Multi-sensor Fusion and Deep Learning [26.133092114053472]
Connected and automated vehicles (CAVs) have become a transformative technology that can change our daily life.
Currently, millimeter-wave (mmWave) bands are identified as a promising CAV connectivity solution.
While they can provide high data rates, their realization faces many challenges, such as high attenuation during mmWave signal propagation and mobility management.
This study proposes an autonomous and low-cost testbed to collect extensive co-located mmWave signal and other sensors data to facilitate mmWave vehicular communications.
arXiv Detail & Related papers (2024-01-03T16:38:56Z)
- The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset [4.413278371057897]
Inner-city intersections are among the most critical traffic areas for injury and fatal accidents.
We use an intelligent public inner-city intersection in Germany with visual sensor technology.
The resulting dataset consists of eight hours of measurement data.
arXiv Detail & Related papers (2023-07-12T13:46:20Z)
- V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception [49.7212681947463]
Vehicle-to-Vehicle (V2V) cooperative perception systems have great potential to revolutionize the autonomous driving industry.
We present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception.
Our dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HD maps.
arXiv Detail & Related papers (2023-03-14T02:49:20Z)
- Berlin V2X: A Machine Learning Dataset from Multiple Vehicles and Radio Access Technologies [56.77079930521082]
We have conducted a detailed measurement campaign that paves the way to a plethora of diverse ML-based studies.
The resulting datasets offer GPS-located wireless measurements across diverse urban environments for both cellular (with two different operators) and sidelink radio access technologies.
We provide an initial analysis of the data showing some of the challenges that ML needs to overcome and the features that ML can leverage.
arXiv Detail & Related papers (2022-12-20T15:26:39Z)
- aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- The NEOLIX Open Dataset for Autonomous Driving [1.4091801425319965]
We present the NEOLIX dataset and its applications in the autonomous driving area.
Our dataset includes about 30,000 frames with point cloud labels, and more than 600k 3D bounding boxes with annotations.
arXiv Detail & Related papers (2020-11-27T02:27:39Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.