V2X-Sim: A Virtual Collaborative Perception Dataset for Autonomous
Driving
- URL: http://arxiv.org/abs/2202.08449v1
- Date: Thu, 17 Feb 2022 05:14:02 GMT
- Title: V2X-Sim: A Virtual Collaborative Perception Dataset for Autonomous
Driving
- Authors: Yiming Li, Ziyan An, Zixun Wang, Yiqi Zhong, Siheng Chen, Chen Feng
- Abstract summary: Vehicle-to-everything (V2X) denotes the collaboration between a vehicle and any entity in its surroundings.
We present the V2X-Sim dataset, the first public large-scale collaborative perception dataset in autonomous driving.
- Score: 26.961213523096948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vehicle-to-everything (V2X), which denotes the collaboration between a
vehicle and any entity in its surroundings, can fundamentally improve perception
in self-driving systems. While individual perception has advanced rapidly,
collaborative perception has made little progress due to the shortage of public
V2X datasets. In this work, we present the V2X-Sim dataset, the first
public large-scale collaborative perception dataset in autonomous driving.
V2X-Sim provides: 1) well-synchronized recordings from roadside infrastructure
and multiple vehicles at the intersection to enable collaborative perception,
2) multi-modality sensor streams to facilitate multi-modality perception, 3)
diverse well-annotated ground truth to support various downstream tasks
including detection, tracking, and segmentation. We seek to inspire research on
multi-agent, multi-modality, multi-task perception, and our virtual dataset
promises to promote the development of collaborative perception before
realistic datasets become widely available.
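To make the described structure concrete, here is a minimal sketch of what one synchronized multi-agent, multi-modality sample could look like in Python. The class and field names (AgentRecord, CollaborativeFrame, and so on) are illustrative assumptions for this summary, not the dataset's actual API.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class AgentRecord:
    """Sensor data from one agent (a vehicle or the roadside unit)."""
    agent_id: str                          # e.g. "vehicle_0" or "infrastructure"
    lidar_points: np.ndarray               # (N, 4): x, y, z, intensity
    camera_images: Dict[str, np.ndarray]   # camera name -> (H, W, 3) RGB image
    pose: np.ndarray                       # (4, 4) agent-to-world transform


@dataclass
class CollaborativeFrame:
    """One synchronized timestep shared by all agents at the intersection."""
    timestamp: float
    agents: List[AgentRecord]
    boxes_3d: np.ndarray                   # (M, 7) GT boxes: x, y, z, l, w, h, yaw
    track_ids: np.ndarray                  # (M,) identities for the tracking task
    bev_seg_mask: np.ndarray               # (H, W) labels for BEV segmentation


def to_ego_frame(frame: CollaborativeFrame, ego_id: str) -> List[np.ndarray]:
    """Warp every agent's LiDAR into the ego vehicle's coordinate frame,
    the usual first step before any collaborative fusion."""
    ego = next(a for a in frame.agents if a.agent_id == ego_id)
    world_to_ego = np.linalg.inv(ego.pose)
    clouds = []
    for agent in frame.agents:
        xyz1 = np.concatenate(
            [agent.lidar_points[:, :3],
             np.ones((len(agent.lidar_points), 1))], axis=1)
        # agent -> world -> ego, using the homogeneous 4x4 poses
        warped = (world_to_ego @ agent.pose @ xyz1.T).T[:, :3]
        clouds.append(warped)
    return clouds
```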
Related papers
- Collaborative Perception Datasets in Autonomous Driving: A Survey [0.0]
The paper systematically analyzes a variety of datasets, comparing them based on aspects such as diversity, sensor setup, quality, public availability, and their applicability to downstream tasks.
The paper emphasizes the importance of addressing privacy and security concerns in data sharing and dataset creation.
arXiv Detail & Related papers (2024-04-22T09:36:17Z)
- V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception [22.3955949838171]
We propose a dataset that has a mixture of multiple vehicles and smart infrastructure simultaneously to facilitate the V2X cooperative perception development.
V2X-Real is collected using two connected automated vehicles and two smart infrastructures.
The whole dataset contains 33K LiDAR frames and 171K camera frames with over 1.2M annotated bounding boxes of 10 categories in very challenging urban scenarios.
arXiv Detail & Related papers (2024-03-24T06:30:02Z)
- V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception [49.7212681947463]
The Vehicle-to-Vehicle (V2V) cooperative perception system has great potential to revolutionize the autonomous driving industry.
We present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception.
Our dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HDMaps.
arXiv Detail & Related papers (2023-03-14T02:49:20Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Berlin V2X: A Machine Learning Dataset from Multiple Vehicles and Radio Access Technologies [56.77079930521082]
We have conducted a detailed measurement campaign that paves the way to a plethora of diverse ML-based studies.
The resulting datasets offer GPS-located wireless measurements across diverse urban environments for both cellular (with two different operators) and sidelink radio access technologies.
We provide an initial analysis of the data showing some of the challenges that ML needs to overcome and the features that ML can leverage.
arXiv Detail & Related papers (2022-12-20T15:26:39Z)
- DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving [19.66714697653504]
The Vehicle-to-Everything (V2X) network has enabled collaborative perception in autonomous driving.
The lack of datasets has severely hindered the development of collaborative perception algorithms.
We release DOLPHINS: Dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving.
arXiv Detail & Related papers (2022-07-15T17:07:07Z)
- CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers [36.838065731893735]
CoBEVT is the first generic multi-agent perception framework that can cooperatively generate BEV map predictions.
CoBEVT achieves state-of-the-art performance for cooperative BEV semantic segmentation.
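As a rough picture of what cooperative BEV prediction involves, the sketch below fuses per-agent BEV feature maps with ordinary multi-head attention at every BEV cell. It is a generic stand-in under assumed shapes, not CoBEVT's actual sparse transformer design.

```python
import torch
import torch.nn as nn


class CrossAgentBEVFusion(nn.Module):
    """Fuse BEV feature maps from several agents: at every BEV cell,
    the ego attends over the features all agents produced for that cell."""

    def __init__(self, dim: int = 128, heads: int = 4, num_classes: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.seg_head = nn.Conv2d(dim, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (A, C, H, W) -- one BEV feature map per agent, already
        # warped into the ego frame; agent 0 is taken to be the ego.
        a, c, h, w = feats.shape
        tokens = feats.permute(2, 3, 0, 1).reshape(h * w, a, c)  # (HW, A, C)
        ego = tokens[:, :1, :]                                   # (HW, 1, C)
        fused, _ = self.attn(ego, tokens, tokens)                # (HW, 1, C)
        bev = fused.reshape(h, w, c).permute(2, 0, 1).unsqueeze(0)
        return self.seg_head(bev)                                # (1, K, H, W)
```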
arXiv Detail & Related papers (2022-07-05T17:59:28Z)
- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer [58.71845618090022]
We build a holistic attention model, namely V2X-ViT, to fuse information across on-road agents.
V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention.
To validate our approach, we create a large-scale V2X perception dataset.
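The alternation can be pictured as follows: one attention layer mixes information across agents at each spatial location, and the next applies self-attention within local windows of each agent's map. The block below is a simplified, homogeneous sketch of that alternation; the paper's heterogeneous node types, multi-scale windows, and exact layer layout are omitted.

```python
import torch
import torch.nn as nn


class AlternatingV2XBlock(nn.Module):
    """Simplified stand-in for one V2X-ViT stage: cross-agent attention
    per BEV cell, then window self-attention within each agent's map."""

    def __init__(self, dim: int = 128, heads: int = 4, window: int = 8):
        super().__init__()
        self.window = window
        self.agent_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (A, C, H, W) -- per-agent BEV features in a shared frame.
        a, c, h, w = x.shape
        # 1) Multi-agent attention: each BEV cell attends across agents.
        t = x.permute(2, 3, 0, 1).reshape(h * w, a, c)       # (HW, A, C)
        t = t + self.agent_attn(t, t, t)[0]
        x = t.reshape(h, w, a, c).permute(2, 3, 0, 1)        # (A, C, H, W)
        # 2) Window attention: tokens attend within local s x s windows.
        s = self.window
        assert h % s == 0 and w % s == 0, "H, W must be divisible by window"
        win = (x.reshape(a, c, h // s, s, w // s, s)
                .permute(0, 2, 4, 3, 5, 1)                   # (A, H/s, W/s, s, s, C)
                .reshape(-1, s * s, c))                      # (A*windows, s*s, C)
        win = win + self.window_attn(win, win, win)[0]
        x = (win.reshape(a, h // s, w // s, s, s, c)
                .permute(0, 5, 1, 3, 2, 4)                   # (A, C, H/s, s, W/s, s)
                .reshape(a, c, h, w))
        return x
```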
arXiv Detail & Related papers (2022-03-20T20:18:25Z)
- Euro-PVI: Pedestrian Vehicle Interactions in Dense Urban Centers [126.81938540470847]
We propose Euro-PVI, a dataset of pedestrian and bicyclist trajectories.
In this work, we develop a joint inference model that learns an expressive multi-modal shared latent space across agents in the urban scene.
We achieve state-of-the-art results on the nuScenes and Euro-PVI datasets, demonstrating the importance of capturing interactions between the ego-vehicle and pedestrians (bicyclists) for accurate predictions.
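One hedged way to picture such a joint model is a conditional VAE whose latent code is shared by all agents in a scene, so sampled futures stay mutually consistent. The toy sketch below is an assumption for illustration only, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class JointTrajectoryCVAE(nn.Module):
    """Toy conditional VAE: encode the joint past of all agents in a scene
    into one shared latent z, then decode a future for each agent from z."""

    def __init__(self, fut_len: int = 12, hidden: int = 64, zdim: int = 16):
        super().__init__()
        self.fut_len = fut_len
        self.enc = nn.GRU(2, hidden, batch_first=True)        # shared past encoder
        self.to_mu = nn.Linear(hidden, zdim)
        self.to_logvar = nn.Linear(hidden, zdim)
        self.dec = nn.Linear(hidden + zdim, fut_len * 2)      # per-agent future

    def forward(self, past: torch.Tensor):
        # past: (A, T_past, 2) -- xy histories of all agents in one scene.
        _, h = self.enc(past)                                 # (1, A, hidden)
        h = h.squeeze(0)                                      # (A, hidden)
        scene = h.mean(dim=0)                                 # shared scene code
        mu, logvar = self.to_mu(scene), self.to_logvar(scene)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z = z.expand(h.size(0), -1)                           # same z for all agents
        fut = self.dec(torch.cat([h, z], dim=-1))             # (A, fut_len * 2)
        return fut.view(h.size(0), self.fut_len, 2), mu, logvar
```

Sampling several z values yields several jointly plausible scene futures, which is the intuition behind a multi-modal shared latent space.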
arXiv Detail & Related papers (2021-06-22T15:40:21Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.