Sensing accident-prone features in urban scenes for proactive driving
and accident prevention
- URL: http://arxiv.org/abs/2202.12788v1
- Date: Fri, 25 Feb 2022 16:05:53 GMT
- Title: Sensing accident-prone features in urban scenes for proactive driving
and accident prevention
- Authors: Sumit Mishra, Praveen Kumar Rajendran, Luiz Felipe Vecchietti, and
Dongsoo Har
- Abstract summary: This paper proposes a visual notification of accident-prone features to drivers, based on real-time images obtained via dashcam.
Google Street View images around accident hotspots are used to train a family of deep convolutional neural networks (CNNs).
The CNNs detect accident-prone features and classify a given urban scene as either an accident hotspot or a non-hotspot.
- Score: 0.5669790037378094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In urban cities, visual information along and on roadways is likely to
distract drivers and lead them to miss traffic signs and other accident-prone
features. To avoid accidents caused by missing these visual cues, this paper
proposes a visual notification of accident-prone features to drivers, based on
real-time images obtained via dashcam. For this purpose, Google Street View
images around accident hotspots (areas of dense accident occurrence) identified
from an accident dataset are used to train a family of deep convolutional
neural networks (CNNs). The trained CNNs detect accident-prone features and
classify a given urban scene as either an accident hotspot or a non-hotspot
(area of sparse accident occurrence). A given accident hotspot is correctly
classified by the trained CNNs with accuracy of up to 90%. The capability of the
family of CNNs to detect accident-prone features is analyzed through a
comparative study of four class activation map (CAM) methods, which are used to
inspect the specific accident-prone features driving the CNNs' decisions,
together with pixel-level object class classification. The outputs of the CAM
methods are processed by an image processing pipeline to extract only those
accident-prone features that can be explained to drivers through the visual
notification system. To prove the efficacy of the accident-prone features, an
ablation study is conducted. Ablating the accident-prone features, which occupy
on average 7.7% of the total area of each image sample, increases the chance of
the given area being classified as a non-hotspot by up to 13.7%.
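The CAM-based inspection described above can be sketched in outline. The snippet below is a minimal illustration of the original CAM formulation (a weighted sum of the final convolutional feature maps using the target class's dense-layer weights, followed by ReLU, normalization, and thresholding), not the paper's actual implementation; the array shapes, weight values, and threshold are illustrative assumptions.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM as the class-weighted sum of the final conv layer's
    feature maps (shape: K x H x W), keeping only positive evidence."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)                               # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                     # normalize to [0, 1]
    return cam

def accident_prone_mask(cam, threshold=0.5):
    """Threshold the CAM to keep only strongly activated regions,
    analogous to extracting accident-prone features for notification."""
    return cam >= threshold

# Toy example: 3 feature maps of size 4x4 and hypothetical
# hotspot-class weights from the classifier's final dense layer.
rng = np.random.default_rng(0)
feature_maps = rng.random((3, 4, 4))
class_weights = np.array([0.8, -0.2, 0.5])

cam = class_activation_map(feature_maps, class_weights)
mask = accident_prone_mask(cam)
area_fraction = mask.mean()  # fraction of the image flagged accident-prone
```

In the paper's ablation study, the analogue of `area_fraction` averages 7.7% per image; the CAM methods compared there differ mainly in how the per-map weights are obtained, not in this weighted-sum structure.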
Related papers
- Enhancing Vision-Language Models with Scene Graphs for Traffic Accident Understanding [45.7444555195196]
This work introduces a multi-stage, multimodal pipeline to pre-process videos of traffic accidents, encode them as scene graphs, and align this representation with vision and language modalities for accident classification.
When trained on 4 classes, our method achieves a balanced accuracy score of 57.77% on an (unbalanced) subset of the popular Detection of Traffic Anomaly benchmark.
arXiv Detail & Related papers (2024-07-08T13:15:11Z)
- Visual Context-Aware Person Fall Detection [52.49277799455569]
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false-positive alarms.
We present a segmentation pipeline to semi-automatically separate individuals from objects in images.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- Abductive Ego-View Accident Video Understanding for Safe Driving Perception [75.60000661664556]
We present MM-AU, a novel dataset for Multi-Modal Accident video Understanding.
MM-AU contains 11,727 in-the-wild ego-view accident videos, each with temporally aligned text descriptions.
We present an Abductive accident Video understanding framework for Safe Driving perception (AdVersa-SD).
arXiv Detail & Related papers (2024-03-01T10:42:52Z)
- Exploring the Potential of Multi-Modal AI for Driving Hazard Prediction [18.285227911703977]
We formulate it as a task of anticipating impending accidents using a single input image captured by car dashcams.
The problem requires predicting and reasoning about future events from uncertain observations.
To enable research in this understudied area, a new dataset named the DHPR dataset is created.
arXiv Detail & Related papers (2023-10-07T03:16:30Z)
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z)
- TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance [2.1076255329439304]
Existing datasets in traffic accidents are either small-scale, not from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work.
arXiv Detail & Related papers (2022-09-26T03:00:50Z)
- Towards explainable artificial intelligence (XAI) for early anticipation of traffic accidents [8.34084323253809]
An accident anticipation model aims to predict accidents promptly and accurately before they occur.
Existing Artificial Intelligence (AI) models of accident anticipation lack a human-interpretable explanation of their decision-making.
This paper presents a Gated Recurrent Unit (GRU) network that learns spatio-temporal features for the early anticipation of traffic accidents from dashcam video data.
arXiv Detail & Related papers (2021-07-31T15:53:32Z)
- A model for traffic incident prediction using emergency braking data [77.34726150561087]
We address the fundamental problem of data scarcity in road traffic accident prediction by training our model on emergency braking events instead of accidents.
We present a prototype implementing a traffic incident prediction model for Germany based on emergency braking data from Mercedes-Benz vehicles.
arXiv Detail & Related papers (2021-02-12T18:17:12Z)
- Computer Vision based Accident Detection for Autonomous Vehicles [0.0]
We propose a novel support system for self-driving cars that detects vehicular accidents through a dashboard camera.
The framework has been tested on a custom dataset of dashcam footage and achieves a high accident detection rate while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-12-20T08:51:10Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.