Uncertainty-based Traffic Accident Anticipation with Spatio-Temporal
Relational Learning
- URL: http://arxiv.org/abs/2008.00334v1
- Date: Sat, 1 Aug 2020 20:21:48 GMT
- Title: Uncertainty-based Traffic Accident Anticipation with Spatio-Temporal
Relational Learning
- Authors: Wentao Bao and Qi Yu and Yu Kong
- Abstract summary: Traffic accident anticipation aims to predict accidents from dashcam videos as early as possible.
Current deterministic deep neural networks could be overconfident in false predictions.
We propose an uncertainty-based accident anticipation model with spatio-temporal relational learning.
- Score: 30.59728753059457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic accident anticipation aims to predict accidents from dashcam videos
as early as possible, which is critical to safety-guaranteed self-driving
systems. With cluttered traffic scenes and limited visual cues, it is highly
challenging to predict how soon an accident will occur from early observed
frames. Most existing approaches are developed to learn features of
accident-relevant agents for accident anticipation, while ignoring the features
of their spatial and temporal relations. Besides, current deterministic deep
neural networks could be overconfident in false predictions, leading to high
risk of traffic accidents caused by self-driving systems. In this paper, we
propose an uncertainty-based accident anticipation model with spatio-temporal
relational learning. It sequentially predicts the probability of traffic
accident occurrence with dashcam videos. Specifically, we propose to take
advantage of graph convolution and recurrent networks for relational feature
learning, and leverage Bayesian neural networks to address the intrinsic
variability of latent relational representations. The derived uncertainty-based
ranking loss is found to significantly boost model performance by improving the
quality of relational features. In addition, we collect a new Car Crash Dataset
(CCD) for traffic accident anticipation which contains environmental attributes
and accident reasons annotations. Experimental results on both public and the
newly-compiled datasets show state-of-the-art performance of our model. Our
code and CCD dataset are available at https://github.com/Cogito2012/UString.
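The pipeline described in the abstract (graph convolution over accident-relevant objects, a recurrent update over frames, and a Bayesian treatment of uncertainty) can be illustrated with a minimal NumPy sketch. This is not the authors' UString implementation: all weights are random stand-ins, the recurrent cell is simplified rather than a full GRU, and Monte Carlo dropout is used here as one common approximation to Bayesian inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: T frames, N objects per frame, D-dim object features, H hidden units.
T, N, D, H = 5, 4, 8, 16

def gcn_layer(X, A, W):
    """One graph-convolution step: average neighbor features, then project."""
    deg = A.sum(axis=1, keepdims=True)
    return np.tanh((A @ X / np.maximum(deg, 1)) @ W)

def recurrent_step(h, x, Wh, Wx):
    """A simplified recurrent update (stand-in for a GRU cell)."""
    return np.tanh(h @ Wh + x @ Wx)

def mc_dropout(h, p, rng):
    """Monte Carlo dropout: keep each unit with prob 1-p, rescale the rest."""
    mask = rng.random(h.shape) > p
    return h * mask / (1.0 - p)

# Random weights stand in for trained parameters.
Wg = rng.normal(0, 0.3, (D, H))
Wh = rng.normal(0, 0.3, (H, H))
Wx = rng.normal(0, 0.3, (H, H))
w_out = rng.normal(0, 0.3, H)

frames = rng.normal(0, 1, (T, N, D))  # object features per frame
A = np.ones((N, N))                   # fully connected object graph

def predict_once(rng, p_drop=0.3):
    """One stochastic forward pass: per-frame accident probability."""
    h = np.zeros(H)
    probs = []
    for t in range(T):
        rel = gcn_layer(frames[t], A, Wg).mean(axis=0)  # pooled relational feature
        h = recurrent_step(h, rel, Wh, Wx)
        h_d = mc_dropout(h, p_drop, rng)
        probs.append(1 / (1 + np.exp(-h_d @ w_out)))    # sigmoid accident score
    return np.array(probs)

# Uncertainty estimate: mean and variance over repeated stochastic passes.
samples = np.stack([predict_once(rng) for _ in range(50)])
mean_p, var_p = samples.mean(axis=0), samples.var(axis=0)
print(mean_p.round(3), var_p.round(3))
```

The per-frame variance is what a deterministic network cannot provide: a high mean score with high variance signals an unreliable prediction, which is the failure mode the abstract's overconfidence argument targets.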
Related papers
- Learning Traffic Crashes as Language: Datasets, Benchmarks, and What-if Causal Analyses [76.59021017301127]
We propose a large-scale traffic crash language dataset, named CrashEvent, summarizing 19,340 real-world crash reports.
We formulate crash event feature learning as a novel text reasoning problem and fine-tune various large language models (LLMs) to predict detailed accident outcomes.
Our experimental results show that our LLM-based approach not only predicts the severity of accidents but also classifies different types of accidents and predicts injury outcomes.
arXiv Detail & Related papers (2024-06-16T03:10:16Z) - Uncertainty-Aware Probabilistic Graph Neural Networks for Road-Level Traffic Accident Prediction [6.570852598591727]
We introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Network (STZITD-GNN) -- the first uncertainty-aware graph deep learning model for multistep road-level traffic accident prediction.
Our study demonstrates that STZITD-GNN can effectively inform targeted road monitoring, thereby improving urban road safety strategies.
arXiv Detail & Related papers (2023-09-10T16:35:47Z) - Augmenting Ego-Vehicle for Traffic Near-Miss and Accident Classification
Dataset using Manipulating Conditional Style Translation [0.3441021278275805]
Immediately before an accident occurs, accident and near-miss events are visually indistinguishable.
Our contribution is to redefine the accident definition and re-annotate the inconsistent accident labels on the DADA-2000 dataset, together with near-miss events.
The proposed method integrates two components: conditional style translation (CST) and a separable 3-dimensional convolutional neural network (S3D).
arXiv Detail & Related papers (2023-01-06T22:04:47Z) - Cognitive Accident Prediction in Driving Scenes: A Multimodality
Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z) - FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z) - Safety-aware Motion Prediction with Unseen Vehicles for Autonomous
Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z) - Towards explainable artificial intelligence (XAI) for early anticipation
of traffic accidents [8.34084323253809]
An accident anticipation model aims to predict accidents promptly and accurately before they occur.
Existing Artificial Intelligence (AI) models of accident anticipation lack a human-interpretable explanation of their decision-making.
This paper presents a Gated Recurrent Unit (GRU) network that learns spatio-temporal features for the early anticipation of traffic accidents from dashcam video data.
arXiv Detail & Related papers (2021-07-31T15:53:32Z) - DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation [36.350348194248014]
Traffic accident anticipation aims to accurately and promptly predict the occurrence of a future accident from dashcam videos.
Existing approaches typically focus on capturing the cues of spatial and temporal context before a future accident occurs.
We propose Deep ReInforced accident anticipation with Visual Explanation, named DRIVE.
arXiv Detail & Related papers (2021-07-21T16:33:21Z) - A Dynamic Spatial-temporal Attention Network for Early Anticipation of
Traffic Accidents [12.881094474374231]
This paper presents a dynamic spatial-temporal attention (DSTA) network for early anticipation of traffic accidents from dashcam videos.
It learns to select discriminative temporal segments of a video sequence with a module named Dynamic Temporal Attention (DTA).
The spatial-temporal relational features of accidents, along with scene appearance features, are learned jointly with a Gated Recurrent Unit (GRU) network.
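The temporal-attention idea in this entry, scoring each frame's recurrent state and pooling with a softmax over time, can be sketched in a few lines. This is an illustrative NumPy toy with random stand-in weights, not the DSTA implementation, and the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H = 6, 8                          # frames and hidden size (toy values)

hidden = rng.normal(0, 1, (T, H))    # stand-in per-frame GRU hidden states
w_att = rng.normal(0, 1, H)          # stand-in learnable attention vector

# Score each frame, softmax over time, then form a weighted temporal summary.
scores = hidden @ w_att
weights = np.exp(scores - scores.max())
weights = weights / weights.sum()    # softmax: non-negative, sums to 1
context = weights @ hidden           # (H,) attention-pooled temporal feature

print(weights.round(3))
```

Frames with higher attention weight dominate the pooled feature, which is how such a module can emphasize the segments most indicative of an imminent accident.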
arXiv Detail & Related papers (2021-06-18T15:58:53Z) - A model for traffic incident prediction using emergency braking data [77.34726150561087]
We address the fundamental problem of data scarcity in road traffic accident prediction by training our model on emergency braking events instead of accidents.
We present a prototype implementing a traffic incident prediction model for Germany based on emergency braking data from Mercedes-Benz vehicles.
arXiv Detail & Related papers (2021-02-12T18:17:12Z) - Driver Intention Anticipation Based on In-Cabin and Driving Scene
Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.