DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving
- URL: http://arxiv.org/abs/2304.01168v5
- Date: Sun, 17 Dec 2023 10:00:55 GMT
- Title: DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving
- Authors: Tianqi Wang, Sukmin Kim, Wenxuan Ji, Enze Xie, Chongjian Ge, Junsong
Chen, Zhenguo Li, Ping Luo
- Abstract summary: We propose a large-scale dataset containing diverse accident scenarios that frequently occur in real-world driving.
The proposed DeepAccident dataset includes 57K annotated frames and 285K annotated samples, approximately 7 times more than the large-scale nuScenes dataset.
- Score: 76.29141888408265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety is the primary priority of autonomous driving. Nevertheless, no
published dataset currently supports the direct and explainable safety
evaluation for autonomous driving. In this work, we propose DeepAccident, a
large-scale dataset generated via a realistic simulator containing diverse
accident scenarios that frequently occur in real-world driving. The proposed
DeepAccident dataset includes 57K annotated frames and 285K annotated samples,
approximately 7 times more than the large-scale nuScenes dataset with 40K
annotated samples. In addition, we propose a new task, end-to-end motion and
accident prediction, which can be used to directly evaluate the accident
prediction ability of different autonomous driving algorithms. Furthermore,
for each scenario, we set up four vehicles along with one infrastructure unit
to record data, thus providing diverse viewpoints on accident scenarios and
enabling V2X (vehicle-to-everything) research on perception and prediction tasks. Finally,
we present a baseline V2X model named V2XFormer that demonstrates superior
performance for motion and accident prediction and 3D object detection compared
to the single-vehicle model.
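To make the proposed end-to-end motion and accident prediction task more concrete, the sketch below shows one way accident prediction could be scored from the predicted future trajectories of the recorded agents and compared against a scenario's ground-truth accident label. This is a minimal sketch under assumed conventions: the data layout, function names, and the 2.0 m collision threshold are illustrative and not the dataset's actual API.

```python
import numpy as np

# Hypothetical layout (not the DeepAccident API): each agent's predicted future
# trajectory is a (T, 2) array of (x, y) positions in a shared world frame.
COLLISION_DIST_M = 2.0  # assumed center-to-center distance treated as a collision


def predict_accident(trajectories):
    """Flag an accident if any two predicted trajectories come within the threshold.

    trajectories: dict mapping agent id -> (T, 2) numpy array.
    Returns (accident_predicted, (agent_a, agent_b, first_timestep) or None).
    """
    agents = list(trajectories)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            # Per-timestep distance between the two predicted paths.
            dists = np.linalg.norm(trajectories[a] - trajectories[b], axis=-1)
            hits = np.flatnonzero(dists < COLLISION_DIST_M)
            if hits.size:
                return True, (a, b, int(hits[0]))
    return False, None


def accident_accuracy(predicted_flags, gt_flags):
    """Fraction of scenarios whose predicted accident flag matches the ground truth."""
    predicted_flags, gt_flags = np.asarray(predicted_flags), np.asarray(gt_flags)
    return float(np.mean(predicted_flags == gt_flags))


if __name__ == "__main__":
    # Toy scenario: two predicted paths that cross, so an accident should be flagged.
    t = np.linspace(0.0, 3.0, 7)[:, None]                       # 7 future timesteps
    ego = np.hstack([t * 5.0, np.zeros_like(t)])                # driving along +x at 5 m/s
    other = np.hstack([np.full_like(t, 10.0), 5.0 - 2.0 * t])   # crossing from the side
    print(predict_accident({"ego": ego, "other": other}))       # (True, ('ego', 'other', 4))
```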
Related papers
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- Trajectory Prediction with Observations of Variable-Length for Motion Planning in Highway Merging Scenarios [5.193470362635256]
Existing methods cannot initiate prediction for a vehicle unless observed for a fixed duration of two or more seconds.
This paper proposes a novel transformer-based trajectory prediction approach, specifically trained to handle any observation length larger than one frame.
We perform a comprehensive evaluation of the proposed method using two large-scale highway trajectory datasets.
arXiv Detail & Related papers (2023-06-08T18:03:48Z)
- Deep Virtual-to-Real Distillation for Pedestrian Crossing Prediction [18.17737928566106]
We formulate a deep virtual-to-real distillation framework by introducing synthetic data that can be generated conveniently.
We borrow the abundant pedestrian-movement information in synthetic videos for pedestrian crossing prediction on real data, with a simple and lightweight implementation.
State-of-the-art performance of this framework is demonstrated through extensive experimental analysis.
arXiv Detail & Related papers (2022-11-02T03:53:55Z)
- TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance [2.1076255329439304]
Existing traffic accident datasets are either small-scale, not collected from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, this work proposes a large-scale traffic accident dataset named TAD.
arXiv Detail & Related papers (2022-09-26T03:00:50Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
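As a rough illustration of the idea in the last entry above (testing a planner against simulated perception and prediction outputs rather than rendered sensor data), the sketch below replaces that paper's learned simulator with a simple noise model over ground-truth object states. The class, function names, and noise parameters are assumptions made for illustration, not the paper's method.

```python
import random
from dataclasses import dataclass, replace

# Illustrative stand-in for a learned perception/prediction simulator: perturb
# ground-truth object states with noise and random misses before handing them
# to a motion planner. Names and noise levels are assumptions for illustration.


@dataclass
class TrackedObject:
    x: float        # position in the ego frame [m]
    y: float
    heading: float  # [rad]
    speed: float    # [m/s]


def simulate_perception(gt_objects, pos_sigma=0.3, miss_rate=0.05, rng=random):
    """Add Gaussian position noise and random missed detections to ground truth."""
    noisy = []
    for obj in gt_objects:
        if rng.random() < miss_rate:  # simulated missed detection
            continue
        noisy.append(replace(obj,
                             x=obj.x + rng.gauss(0.0, pos_sigma),
                             y=obj.y + rng.gauss(0.0, pos_sigma)))
    return noisy


if __name__ == "__main__":
    gt = [TrackedObject(12.0, -1.5, 0.0, 8.0), TrackedObject(30.0, 3.2, 3.1, 0.0)]
    print(simulate_perception(gt))  # planner input with perception-like noise
```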