DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation
- URL: http://arxiv.org/abs/2107.10189v1
- Date: Wed, 21 Jul 2021 16:33:21 GMT
- Title: DRIVE: Deep Reinforced Accident Anticipation with Visual Explanation
- Authors: Wentao Bao, Qi Yu, Yu Kong
- Abstract summary: Traffic accident anticipation aims to accurately and promptly predict the occurrence of a future accident from dashcam videos.
Existing approaches typically focus on capturing the cues of spatial and temporal context before a future accident occurs.
We propose Deep ReInforced accident anticipation with Visual Explanation, named DRIVE.
- Score: 36.350348194248014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic accident anticipation aims to accurately and promptly predict the
occurrence of a future accident from dashcam videos, which is vital for a
safety-guaranteed self-driving system. To encourage an early and accurate
decision, existing approaches typically focus on capturing the cues of spatial
and temporal context before a future accident occurs. However, their
decision-making lacks visual explanation and ignores the dynamic interaction
with the environment. In this paper, we propose Deep ReInforced accident
anticipation with Visual Explanation, named DRIVE. The method simulates both
the bottom-up and top-down visual attention mechanism in a dashcam observation
environment so that the decision from the proposed stochastic multi-task agent
can be visually explained by attentive regions. Moreover, the proposed dense
anticipation reward and sparse fixation reward are effective in training the
DRIVE model with our improved reinforcement learning algorithm. Experimental
results show that the DRIVE model achieves state-of-the-art performance on
multiple real-world traffic accident datasets. The code and pre-trained model
will be available upon paper acceptance.
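The abstract names two training signals, a dense anticipation reward and a sparse fixation reward, without giving their form. Below is a minimal sketch of how such a combined signal could look, assuming an exponential earliness weight for the dense term and cosine similarity against human fixation maps for the sparse term; the function names and the `tau` and `lam` hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import math

def dense_anticipation_reward(score, t, t_accident, tau=10.0):
    # Hypothetical per-frame reward for a positive (accident) clip: the
    # agent's accident score earns more as evidence accumulates toward the
    # accident frame, via an exponential weight. This is a common
    # anticipation-style discount; the paper's exact form is not stated
    # in the abstract.
    weight = math.exp(-max(t_accident - t, 0) / tau)
    return weight * score

def sparse_fixation_reward(attn, fixation):
    # Hypothetical sparse reward: cosine similarity between the agent's
    # attention map and a human fixation map, available only on frames
    # that carry fixation annotations (otherwise contributes nothing).
    if fixation is None:
        return 0.0
    dot = sum(a * f for a, f in zip(attn, fixation))
    na = math.sqrt(sum(a * a for a in attn))
    nf = math.sqrt(sum(f * f for f in fixation))
    return dot / (na * nf) if na > 0 and nf > 0 else 0.0

def total_reward(score, t, t_accident, attn, fixation, lam=0.5):
    # Combined training signal: dense anticipation term plus the sparsely
    # observed fixation term, mixed by an assumed weight lam.
    return (dense_anticipation_reward(score, t, t_accident)
            + lam * sparse_fixation_reward(attn, fixation))
```

Under this sketch, the same accident score is worth more near the accident frame than far from it, which is what pushes the agent toward early yet confident decisions.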
Related papers
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- A Memory-Augmented Multi-Task Collaborative Framework for Unsupervised Traffic Accident Detection in Driving Videos [22.553356096143734]
We propose a novel memory-augmented multi-task collaborative framework (MAMTCF) for unsupervised traffic accident detection in driving videos.
Our method can more accurately detect both ego-involved and non-ego accidents by simultaneously modeling appearance changes and object motions in video frames.
arXiv Detail & Related papers (2023-07-27T01:45:13Z)
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Non-objective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Towards explainable artificial intelligence (XAI) for early anticipation of traffic accidents [8.34084323253809]
An accident anticipation model aims to predict accidents promptly and accurately before they occur.
Existing Artificial Intelligence (AI) models of accident anticipation lack a human-interpretable explanation of their decision-making.
This paper presents a Gated Recurrent Unit (GRU) network that learns spatio-temporal features for the early anticipation of traffic accidents from dashcam video data.
arXiv Detail & Related papers (2021-07-31T15:53:32Z)
- A Dynamic Spatial-temporal Attention Network for Early Anticipation of Traffic Accidents [12.881094474374231]
This paper presents a dynamic spatial-temporal attention (DSTA) network for early anticipation of traffic accidents from dashcam videos.
It learns to select discriminative temporal segments of a video sequence with a module named Dynamic Temporal Attention (DTA).
The spatial-temporal relational features of accidents, along with scene appearance features, are learned jointly with a Gated Recurrent Unit (GRU) network.
arXiv Detail & Related papers (2021-06-18T15:58:53Z)
- Uncertainty-based Traffic Accident Anticipation with Spatio-Temporal Relational Learning [30.59728753059457]
Traffic accident anticipation aims to predict accidents from dashcam videos as early as possible.
Current deterministic deep neural networks could be overconfident in false predictions.
We propose an uncertainty-based accident anticipation model with relational-temporal learning.
arXiv Detail & Related papers (2020-08-01T20:21:48Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction with the accuracy of 83.98% and F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
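The 84.3% F1-score quoted above is the harmonic mean of precision and recall; as a quick reference, it can be computed as follows (the function name is illustrative, not from the paper):

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall. Because the harmonic mean is
    # dominated by the smaller term, an F1 of 84.3% means precision and
    # recall must both be roughly in that neighborhood.
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```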
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.