Mitigating Vulnerable Road Users Occlusion Risk Via Collective Perception: An Empirical Analysis
- URL: http://arxiv.org/abs/2404.07753v1
- Date: Thu, 11 Apr 2024 13:54:15 GMT
- Title: Mitigating Vulnerable Road Users Occlusion Risk Via Collective Perception: An Empirical Analysis
- Authors: Vincent Albert Wolff, Edmir Xhoxhi
- Abstract summary: We present a novel algorithm that quantifies occlusion risk based on the dynamics of both vehicles and VRUs.
Our study extends to examining the role of the Collective Perception Service (CPS) in VRU safety.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent reports from the World Health Organization highlight that Vulnerable Road Users (VRUs) have been involved in over half of the road fatalities in recent years, with occlusion risk - a scenario where VRUs are hidden from drivers' view by obstacles like parked vehicles - being a critical contributing factor. To address this, we present a novel algorithm that quantifies occlusion risk based on the dynamics of both vehicles and VRUs. This algorithm has undergone testing and evaluation using a real-world dataset from German intersections. Additionally, we introduce the concept of Maximum Tracking Loss (MTL), which measures the longest consecutive duration a VRU remains untracked by any vehicle in a given scenario. Our study extends to examining the role of the Collective Perception Service (CPS) in VRU safety. CPS enhances safety by enabling vehicles to share sensor information, thereby potentially reducing occlusion risks. Our analysis reveals that a 25% market penetration of CPS-equipped vehicles can substantially diminish occlusion risks and significantly curtail MTL. These findings demonstrate how various scenarios pose different levels of risk to VRUs and how the deployment of Collective Perception can markedly improve their safety. Furthermore, they underline the efficacy of our proposed metrics to capture occlusion risk as a safety factor.
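To make the Maximum Tracking Loss (MTL) metric concrete, the sketch below computes, for a single VRU, the longest consecutive interval in which no CPS-equipped vehicle tracks it. This is a minimal illustrative sketch, not the authors' implementation: the `TrackingRecord` structure, the per-vehicle `equipped` flag, and the function name are assumptions about how per-timestep perception data might be represented.

```python
# Illustrative sketch (assumed data layout, not the paper's code):
# compute Maximum Tracking Loss (MTL) for one VRU from per-timestep
# records of which vehicles currently track it.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TrackingRecord:
    """One perception snapshot for the VRU."""
    timestamp: float              # time in seconds
    tracking_vehicles: List[str]  # IDs of vehicles that have the VRU in view


def maximum_tracking_loss(records: List[TrackingRecord],
                          equipped: Dict[str, bool]) -> float:
    """Longest consecutive duration (seconds) in which no CPS-equipped
    vehicle tracks the VRU; with every vehicle marked equipped this is
    plain MTL over the whole scenario."""
    records = sorted(records, key=lambda r: r.timestamp)
    longest, gap_start = 0.0, None
    for rec in records:
        tracked = any(equipped.get(v, False) for v in rec.tracking_vehicles)
        if tracked:
            if gap_start is not None:      # close an open tracking gap
                longest = max(longest, rec.timestamp - gap_start)
                gap_start = None
        elif gap_start is None:            # a new tracking gap begins
            gap_start = rec.timestamp
    if gap_start is not None and records:  # gap persists to the last record
        longest = max(longest, records[-1].timestamp - gap_start)
    return longest


# Example: the VRU is untracked between t=0.5 s and t=1.5 s, so MTL = 1.0 s.
records = [
    TrackingRecord(0.0, ["car_1"]),
    TrackingRecord(0.5, []),
    TrackingRecord(1.0, []),
    TrackingRecord(1.5, ["car_2"]),
]
print(maximum_tracking_loss(records, {"car_1": True, "car_2": True}))
```

Varying which vehicle IDs are marked as equipped (e.g., roughly one in four at 25% market penetration) gives a rough sense of how CPS coverage shortens tracking gaps; the paper's own evaluation uses real intersection data rather than such toy records.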
Related papers
- CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception [53.088988929450494]
Collaborative perception (CP) is a promising method for safe connected and autonomous driving.
We propose a new paradigm for malicious agent detection that effectively identifies malicious agents at the feature level.
We also develop a robust defense method called CP-Guard+, which enhances the margin between the representations of benign and malicious features.
arXiv Detail & Related papers (2025-02-07T12:58:45Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting deceptive semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - Vehicle-group-based Crash Risk Prediction and Interpretation on Highways [8.703173025279431]
This study investigates a new vehicle-group (VG) based risk analysis method and explores risk evolution mechanisms considering VG features.
An impact-based vehicle grouping method is proposed to cluster vehicles into VGs by evaluating their responses to the erratic behaviors of nearby vehicles.
A Logistic Regression and a Graph Neural Network (GNN) are then employed to predict VG risks using aggregated and disaggregated VG information.
arXiv Detail & Related papers (2024-02-19T07:47:23Z) - A Safety-Adapted Loss for Pedestrian Detection in Automated Driving [13.676179470606844]
In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
arXiv Detail & Related papers (2024-02-05T13:16:38Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - RCP-RF: A Comprehensive Road-car-pedestrian Risk Management Framework based on Driving Risk Potential Field [1.625213292350038]
We propose a comprehensive driving risk management framework named RCP-RF based on potential field theory under Connected and Automated Vehicles (CAV) environment.
Different from existing algorithms, the proposed framework explicitly accounts for the motion tendency between the ego and obstacle vehicles as well as the pedestrian factor.
Empirical studies validate the superiority of the proposed framework over state-of-the-art methods on the real-world NGSIM dataset and a real AV platform.
arXiv Detail & Related papers (2023-05-04T01:54:37Z) - Analyzing vehicle pedestrian interactions combining data cube structure and predictive collision risk estimation model [5.73658856166614]
This study introduces a new concept of a pedestrian safety system that combines field-level and centralized processes.
The system can warn of upcoming risks immediately in the field and improve the safety of risk-prone areas by assessing the safety levels of roads without relying on actual collisions.
arXiv Detail & Related papers (2021-07-26T23:00:56Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments before adapting to the safety-critical target environment.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z) - Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce CARNOVEL, a novel-scene benchmark for autonomous cars, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)