Network-level Safety Metrics for Overall Traffic Safety Assessment: A
Case Study
- URL: http://arxiv.org/abs/2201.13229v1
- Date: Thu, 27 Jan 2022 19:07:08 GMT
- Title: Network-level Safety Metrics for Overall Traffic Safety Assessment: A
Case Study
- Authors: Xiwen Chen, Hao Wang, Abolfazl Razi, Brendan Russo, Jason Pacheco,
John Roberts, Jeffrey Wishart, Larry Head
- Abstract summary: This paper defines a new set of network-level safety metrics for the overall safety assessment of traffic flow by processing imagery taken by roadside infrastructure sensors.
An integrative analysis of the safety metrics and crash data reveals insightful temporal and spatial correlations between the representative network-level safety metrics and crash frequency.
- Score: 7.8191100993403495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Driving safety analysis has recently witnessed unprecedented results due to
advances in computation frameworks, connected vehicle technology, new
generation sensors, and artificial intelligence (AI). In particular, recent
advances in the performance of deep learning (DL) methods have realized higher
levels of safety for autonomous vehicles and enabled high-volume imagery
processing for driving safety analysis. An important application of DL methods
is extracting
driving safety metrics from traffic imagery. However, the majority of current
methods use safety metrics for micro-scale analysis of individual crash
incidents or near-crash events, which does not provide insightful guidelines
for overall network-level traffic management. On the other hand,
large-scale safety assessment efforts mainly emphasize spatial and temporal
distributions of crashes, while not always revealing the safety violations that
cause crashes. To bridge these two perspectives, we define a new set of
network-level safety metrics for the overall safety assessment of traffic flow
by processing imagery taken by roadside infrastructure sensors. An integrative
analysis of the safety metrics and crash data reveals insightful temporal and
spatial correlations between the representative network-level safety metrics
and crash frequency. The analysis is performed using two video cameras in the
state of Arizona along with a 5-year crash report obtained from the Arizona
Department of Transportation. The results confirm that network-level safety
metrics can be used by traffic management teams to equip traffic monitoring
systems with advanced AI-based risk analysis and to support timely traffic flow
control decisions.
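The paper does not include code, but a minimal sketch of the kind of integrative analysis it describes could look as follows (Python; the time-to-collision metric, threshold, and synthetic data below are illustrative assumptions, not the paper's actual definitions or results):

# Hypothetical sketch: correlate a network-level safety metric with crash frequency.
# All names, thresholds, and data are illustrative placeholders.
import numpy as np
from scipy import stats

def ttc_violations_per_hour(ttc_samples_s, threshold_s=1.5):
    # Count time-to-collision samples that fall below an assumed safety threshold.
    ttc = np.asarray(ttc_samples_s)
    return int(np.sum(ttc < threshold_s))

# Synthetic inputs: one week of hourly TTC samples, as might be extracted from
# roadside camera imagery by a DL-based tracker.
rng = np.random.default_rng(0)
hourly_ttc = [rng.exponential(scale=4.0, size=200) for _ in range(24 * 7)]
violations = np.array([ttc_violations_per_hour(h) for h in hourly_ttc])

# Synthetic crash counts aggregated to the same hourly bins.
crashes = rng.poisson(lam=0.02 * violations)

# Temporal correlation between the safety metric and crash frequency.
rho, p_value = stats.spearmanr(violations, crashes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")

In practice, the aggregation window (hourly, daily, or per road segment) and the choice of correlation measure would depend on how the network-level metrics and the crash records are binned.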
Related papers
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z)
- Enhancing Road Safety: Real-Time Detection of Driver Distraction through Convolutional Neural Networks [0.0]
This study seeks to identify the most efficient model for real-time detection of driver distractions.
The ultimate aim is to incorporate the findings into vehicle safety systems, significantly boosting their capability to prevent accidents triggered by inattention.
arXiv Detail & Related papers (2024-05-28T03:34:55Z)
- AccidentGPT: Accident Analysis and Prevention from V2X Environmental Perception with Multi-modal Large Model [32.14950866838055]
AccidentGPT is a comprehensive accident analysis and prevention multi-modal large model.
For autonomous driving vehicles, we provide comprehensive environmental perception and understanding to control the vehicle and avoid collisions.
For human-driven vehicles, we offer proactive long-range safety warnings and blind-spot alerts.
Our framework supports intelligent and real-time analysis of traffic safety, encompassing pedestrians, vehicles, roads, and the environment.
arXiv Detail & Related papers (2023-12-20T16:19:47Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- AI on the Road: A Comprehensive Analysis of Traffic Accidents and Accident Detection System in Smart Cities [0.0]
This paper presents a comprehensive analysis of traffic accidents in different regions across the United States.
To address the challenges of accident detection and traffic analysis, this paper proposes a framework that uses traffic surveillance cameras and action recognition systems.
arXiv Detail & Related papers (2023-07-22T17:08:13Z)
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- SafeLight: A Reinforcement Learning Method toward Collision-free Traffic Signal Control [5.862792724739738]
One-quarter of road accidents in the U.S. happen at intersections due to problematic signal timing.
We propose a safety-enhanced residual reinforcement learning method (SafeLight)
Our method can significantly reduce collisions while increasing traffic mobility.
arXiv Detail & Related papers (2022-11-20T05:09:12Z)
- Analyzing vehicle pedestrian interactions combining data cube structure and predictive collision risk estimation model [5.73658856166614]
This study introduces a new concept for a pedestrian safety system that combines field and centralized processes.
The system can warn of upcoming risks immediately in the field and improve the safety of risk-frequent areas by assessing the safety levels of roads without requiring actual collisions.
arXiv Detail & Related papers (2021-07-26T23:00:56Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.