I Know You Can't See Me: Dynamic Occlusion-Aware Safety Validation of
Strategic Planners for Autonomous Vehicles Using Hypergames
- URL: http://arxiv.org/abs/2109.09807v1
- Date: Mon, 20 Sep 2021 19:38:14 GMT
- Title: I Know You Can't See Me: Dynamic Occlusion-Aware Safety Validation of
Strategic Planners for Autonomous Vehicles Using Hypergames
- Authors: Maximilian Kahn, Atrisha Sarkar and Krzysztof Czarnecki
- Abstract summary: We develop a novel multi-agent dynamic occlusion risk measure for assessing situational risk.
We present a white-box, scenario-based, accelerated safety validation framework for assessing the safety of strategic planners in AVs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A particular challenge for both autonomous and human driving is dealing with
risk associated with dynamic occlusion, i.e., occlusion caused by other
vehicles in traffic. Based on the theory of hypergames, we develop a novel
multi-agent dynamic occlusion risk (DOR) measure for assessing situational risk
in dynamic occlusion scenarios. Furthermore, we present a white-box,
scenario-based, accelerated safety validation framework for assessing the
safety of strategic planners in AVs. Based on evaluation over a large
naturalistic database, our proposed validation method achieves a 4000% speedup
compared to direct validation on naturalistic data, more diverse coverage, and
the ability to generalize beyond the dataset and generate commonly observed
dynamic occlusion crashes in traffic in an automated manner.
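The core idea of scenario-based accelerated validation can be sketched in a few lines. This is a hedged illustration only: the function names, the toy crash condition, and the parameter ranges are invented for exposition and do not reproduce the paper's hypergame-based DOR measure.

```python
import random

# Hedged sketch of scenario-based accelerated validation. All names, the toy
# crash condition, and the parameter ranges below are invented for
# illustration; the paper's hypergame-based DOR measure is not reproduced.

random.seed(0)  # deterministic sampling for reproducibility

def simulate(gap_m, occluder_speed_mps):
    """Stand-in for a planner rollout behind a dynamic occlusion.

    Returns True on a (toy) crash: a short initial gap combined with a
    fast occluding vehicle is treated as a failure case."""
    return gap_m < 10.0 and occluder_speed_mps > 12.0

def validate(n_scenarios):
    """Sample parameterized occlusion scenarios and count induced crashes."""
    crashes = 0
    for _ in range(n_scenarios):
        gap = random.uniform(2.0, 50.0)    # initial gap to the occluder (m)
        speed = random.uniform(0.0, 20.0)  # occluder speed (m/s)
        crashes += simulate(gap, speed)
    return crashes

# Sampling directly in scenario-parameter space, rather than replaying every
# naturalistic log, is what makes this style of validation "accelerated".
crashes = validate(1000)
```

Sampling concentrates effort on the risky region of scenario space, which is what enables speedups like the 4000% figure reported above.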
Related papers
- INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language Models on Context-Aware Hazard Detection and Edge Case Evaluation [7.362380225654904]
INSIGHT is a hierarchical vision-language model (VLM) framework designed to enhance hazard detection and edge-case evaluation.
By using multimodal data fusion, our approach integrates semantic and visual representations, enabling precise interpretation of driving scenarios.
Experimental results on the BDD100K dataset demonstrate a substantial improvement in hazard prediction accuracy and interpretability over existing models.
arXiv Detail & Related papers (2025-02-01T01:43:53Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting deceptive semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - DeepMF: Deep Motion Factorization for Closed-Loop Safety-Critical Driving Scenario Simulation [11.059102404333885]
Safety-critical traffic scenarios are of great practical relevance to evaluating the robustness of autonomous driving systems.
Existing algorithms for generating safety-critical scenarios rely on snippets of previously recorded traffic events.
In this paper, we propose the Deep Motion Factorization framework, which extends static safety-critical driving scenario generation to closed-loop and interactive adversarial traffic simulation.
arXiv Detail & Related papers (2024-12-23T11:30:24Z) - CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the Ego vehicle.
We also propose a novel approach, termed safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z) - Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z) - A Safe Self-evolution Algorithm for Autonomous Driving Based on Data-Driven Risk Quantification Model [14.398857940603495]
This paper proposes a safe self-evolution algorithm for autonomous driving based on a data-driven risk quantification model.
To prevent the impact of over-conservative safety guarding policies on the self-evolution capability of the algorithm, a safety-evolutionary decision-control integration algorithm with adjustable safety limits is proposed.
arXiv Detail & Related papers (2024-08-23T02:52:35Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
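The counterfactual safety margin idea can be illustrated with a one-dimensional toy example. This is a sketch under invented assumptions (a constant-deceleration braking model toward a stopped lead vehicle; names such as `counterfactual_margin` are hypothetical), not the paper's formulation.

```python
# Toy illustration of a counterfactual safety margin. The constant-deceleration
# braking model, the parameter values, and names such as `counterfactual_margin`
# are invented here; this is not the paper's formulation.

def stops_in_time(gap_m, speed_mps, decel_mps2):
    """True if the ego can stop within the gap under constant deceleration."""
    stopping_distance = speed_mps ** 2 / (2.0 * decel_mps2)
    return stopping_distance <= gap_m

def counterfactual_margin(gap_m, speed_mps, decel_nominal, step=0.01):
    """Smallest reduction of braking effort (m/s^2) that causes a collision.

    A large margin means the nominal behavior is far from a crash; a margin
    of zero means the nominal behavior itself already collides."""
    deviation = 0.0
    while deviation < decel_nominal:
        if not stops_in_time(gap_m, speed_mps, decel_nominal - deviation):
            return deviation
        deviation += step
    return float("inf")  # no deviation in range produces a collision

# Ego at 15 m/s, 40 m behind a stopped vehicle, nominally braking at 6 m/s^2.
margin = counterfactual_margin(gap_m=40.0, speed_mps=15.0, decel_nominal=6.0)
```

Here a small margin flags a risky nominal behavior even when no collision actually occurred, which is the scoring intuition behind the paper.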
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Generating and Characterizing Scenarios for Safety Testing of Autonomous
Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z) - Risk-Sensitive Sequential Action Control with Multi-Modal Human
Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
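The entropic risk measure mentioned in the summary above has a standard closed form, rho_theta(X) = (1/theta) * log E[exp(theta * X)]. A minimal sketch (the cost samples below are invented for illustration):

```python
import math

# Minimal sketch of the entropic risk measure
# rho_theta(X) = (1/theta) * log E[exp(theta * X)],
# evaluated on an empirical sample; the cost values below are invented.

def entropic_risk(costs, theta):
    """Entropic risk of a list of cost samples under the empirical mean.

    For theta > 0 the measure is risk-averse: rare high-cost outcomes are
    weighted more heavily than under the plain expectation, which is the
    limit as theta -> 0."""
    n = len(costs)
    return math.log(sum(math.exp(theta * c) for c in costs) / n) / theta

# Nine benign outcomes and one rare near-collision outcome.
costs = [0.1] * 9 + [5.0]
mean_cost = sum(costs) / len(costs)     # plain expectation, about 0.59
risk = entropic_risk(costs, theta=1.0)  # exceeds the mean: penalizes the tail
```

Because the exponential inflates the rare high-cost sample, the entropic risk sits well above the plain mean, which is why it suits safety-critical crowd navigation.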
This list is automatically generated from the titles and abstracts of the
papers on this site.
This site does not guarantee the quality of the content (including all
information) and is not responsible for any consequences of its use.