Controllable risk scenario generation from human crash data for autonomous vehicle testing
- URL: http://arxiv.org/abs/2512.07874v1
- Date: Thu, 27 Nov 2025 04:53:18 GMT
- Title: Controllable risk scenario generation from human crash data for autonomous vehicle testing
- Authors: Qiujing Lu, Xuanhan Wang, Runze Yuan, Wei Lu, Xinyi Gong, Shuo Feng
- Abstract summary: Controllable Risk Agent Generation (CRAG) is a framework designed to unify the modeling of dominant nominal behaviors and rare safety-critical behaviors. CRAG constructs a structured latent space that disentangles normal and risk-related behaviors, enabling efficient use of limited crash data.
- Score: 13.3074428571403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring the safety of autonomous vehicles (AV) requires rigorous testing under both everyday driving and rare, safety-critical conditions. A key challenge lies in simulating environment agents, including background vehicles (BVs) and vulnerable road users (VRUs), that behave realistically in nominal traffic while also exhibiting risk-prone behaviors consistent with real-world accidents. We introduce Controllable Risk Agent Generation (CRAG), a framework designed to unify the modeling of dominant nominal behaviors and rare safety-critical behaviors. CRAG constructs a structured latent space that disentangles normal and risk-related behaviors, enabling efficient use of limited crash data. By combining risk-aware latent representations with optimization-based mode-transition mechanisms, the framework allows agents to shift smoothly and plausibly from safe to risk states over extended horizons, while maintaining high fidelity in both regimes. Extensive experiments show that CRAG improves diversity compared to existing baselines, while also enabling controllable generation of risk scenarios for targeted and efficient evaluation of AV robustness.
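The abstract's mode-transition idea, agents shifting smoothly and plausibly from safe to risk states in a structured latent space, can be illustrated with a toy sketch. The latent codes, dimensionality, and the cosine interpolation schedule below are all assumptions for illustration; CRAG itself learns the latent space from data and uses an optimization-based transition mechanism.

```python
import math
import random

random.seed(0)
LATENT_DIM = 8  # hypothetical latent dimensionality

# Stand-in latent codes: CRAG's structured latent space disentangles
# normal and risk-related behaviors; here they are simply two random
# points placed in different regions.
z_normal = [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
z_risk = [random.gauss(3.0, 1.0) for _ in range(LATENT_DIM)]

def mode_transition(z_from, z_to, horizon):
    """Smoothly interpolate from a safe latent state to a risk latent
    state over an extended horizon. The cosine schedule has zero slope
    at both endpoints, so the shift starts and ends gently."""
    alphas = [0.5 * (1.0 - math.cos(math.pi * t / (horizon - 1)))
              for t in range(horizon)]
    return [[(1.0 - a) * zf + a * zt for zf, zt in zip(z_from, z_to)]
            for a in alphas]

trajectory = mode_transition(z_normal, z_risk, horizon=20)
```

A decoder mapping each latent point back to agent behavior would then yield a trajectory that remains plausible in both regimes, which is the property the paper's optimization-based mechanism is designed to guarantee.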
Related papers
- Steering Externalities: Benign Activation Steering Unintentionally Increases Jailbreak Risk for Large Language Models [62.16655896700062]
Activation steering is a technique to enhance the utility of Large Language Models (LLMs). We show that it unintentionally introduces critical and under-explored safety risks. Experiments reveal that these interventions act as a force multiplier, creating new vulnerabilities to jailbreaks and increasing attack success rates to over 80% on standard benchmarks.
arXiv Detail & Related papers (2026-02-03T12:32:35Z) - Self-Guard: Defending Large Reasoning Models via enhanced self-reflection [54.775612141528164]
Self-Guard is a lightweight safety defense framework for Large Reasoning Models. It bridges the awareness-compliance gap, achieving robust safety performance without compromising model utility. Self-Guard exhibits strong generalization across diverse unseen risks and varying model scales.
arXiv Detail & Related papers (2026-01-31T13:06:11Z) - Learning from Risk: LLM-Guided Generation of Safety-Critical Scenarios with Prior Knowledge [25.50999678115561]
This paper presents a high-fidelity scenario generation framework that integrates a conditional variational autoencoder (CVAE) with a large language model (LLM). Our framework substantially increases the coverage of high-risk and long-tail events, improves consistency between simulated and real-world traffic distributions, and exposes autonomous driving systems to interactions that are significantly more challenging than those produced by existing rule- or data-driven methods.
arXiv Detail & Related papers (2025-11-25T09:53:09Z) - How Much Is Too Much? Adaptive, Context-Aware Risk Detection in Naturalistic Driving [0.6299766708197883]
We propose a unified, context-aware framework that adapts labels and models over time and across drivers. The framework is tested using two safety indicators, speed-weighted headway and harsh driving events, and three models: Random Forest, XGBoost, and Deep Neural Network (DNN). Overall, the framework shows promise for adaptive, context-aware risk detection that can enhance real-time safety feedback and support driver-focused interventions in intelligent transportation systems.
arXiv Detail & Related papers (2025-07-26T16:24:25Z) - Automating Steering for Safe Multimodal Large Language Models [58.36932318051907]
We introduce AutoSteer, a modular and adaptive inference-time intervention technology that requires no fine-tuning of the underlying model. AutoSteer incorporates three core components: (1) a novel Safety Awareness Score (SAS) that automatically identifies the most safety-relevant distinctions among the model's internal layers; (2) an adaptive safety prober trained to estimate the likelihood of toxic outputs from intermediate representations; and (3) a lightweight Refusal Head that selectively intervenes to modulate generation when safety risks are detected.
arXiv Detail & Related papers (2025-07-17T16:04:55Z) - RADE: Learning Risk-Adjustable Driving Environment via Multi-Agent Conditional Diffusion [17.46462636610847]
Risk-Adjustable Driving Environment (RADE) is a simulation framework that generates statistically realistic and risk-adjustable traffic scenes. RADE learns risk-conditioned behaviors directly from data, preserving naturalistic multi-agent interactions with controllable risk levels. We validate RADE on the real-world rounD dataset, demonstrating that it preserves statistical realism across varying risk levels.
arXiv Detail & Related papers (2025-05-06T04:41:20Z) - CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH controls adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the Ego vehicle.
We also propose a novel approach, termed safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z) - AdvDiffuser: Generating Adversarial Safety-Critical Driving Scenarios via Guided Diffusion [6.909801263560482]
AdvDiffuser is an adversarial framework for generating safety-critical driving scenarios through guided diffusion.
We show that AdvDiffuser can be applied to various tested systems with minimal warm-up episode data.
arXiv Detail & Related papers (2024-10-11T02:03:21Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - Safety-aware Causal Representation for Trustworthy Offline Reinforcement Learning in Autonomous Driving [33.672722472758636]
Offline Reinforcement Learning (RL) approaches exhibit notable efficacy in addressing sequential decision-making problems from offline datasets.
We introduce the saFety-aware strUctured Scenario representatION (FUSION) to facilitate the learning of a generalizable end-to-end driving policy.
Empirical evidence in various driving scenarios attests that Fusion significantly enhances the safety and generalizability of autonomous driving agents.
arXiv Detail & Related papers (2023-10-31T18:21:24Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
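The counterfactual safety margin can be made concrete with a toy one-dimensional car-following example: scan for the smallest extra lead-vehicle deceleration, relative to nominal braking, that produces a collision. All dynamics, parameter values, and the linear scan below are invented for illustration; the paper's framework is data-driven and assesses real AV behaviors.

```python
def min_collision_deviation(gap0=20.0, v_lead=15.0, v_follow=15.0,
                            follow_brake=6.0, react_time=1.0,
                            dt=0.05, horizon=10.0):
    """Smallest extra lead-vehicle deceleration (m/s^2) that closes the
    gap to a follower who only starts braking after `react_time` seconds."""
    def collides(extra_brake):
        gap, vl, vf, t = gap0, v_lead, v_follow, 0.0
        while t < horizon:
            vl = max(0.0, vl - extra_brake * dt)   # lead brakes from t = 0
            if t >= react_time:                    # follower reacts late
                vf = max(0.0, vf - follow_brake * dt)
            gap += (vl - vf) * dt
            if gap <= 0.0:
                return True
            t += dt
        return False

    # Coarse linear scan over deviations; a real framework would search
    # a much richer space of behavioral perturbations.
    for i in range(1, 101):
        extra = i * 0.1
        if collides(extra):
            return extra
    return None

margin = min_collision_deviation()  # smallest deviation that causes a collision
```

A larger margin means the nominal behavior tolerates a bigger perturbation before a collision becomes unavoidable, which is the sense in which the margin scores riskiness.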
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments, then adapts to a safety-critical "target" environment.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
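CARL's intuition, that experience across diverse environments equips an agent to estimate risk before acting in a safety-critical target, can be sketched as ensemble disagreement: predictors trained in different environments disagree most in states where transfer is dangerous. The toy linear models, risk budget, and action-selection rule below are illustrative assumptions, not CARL's actual algorithm.

```python
import random
import statistics

random.seed(1)

# Each "dynamics model" is a toy predictor of the next state given an
# action; in CARL-style approaches an ensemble is learned from diverse
# source environments (these random linear models are stand-ins).
ensemble = [lambda s, a, w=random.uniform(0.8, 1.2): s + w * a
            for _ in range(5)]

def risk(state, action):
    """Ensemble disagreement about the action's outcome, used as a
    pessimistic risk estimate before acting in the target environment."""
    predictions = [model(state, action) for model in ensemble]
    return statistics.pstdev(predictions)

def cautious_action(state, candidates, risk_budget=0.5):
    """Pick the largest-magnitude action whose estimated risk stays
    within budget (a crude stand-in for pessimistic planning)."""
    safe = [a for a in candidates if risk(state, a) <= risk_budget]
    return max(safe, key=abs) if safe else 0.0
```

Bolder actions amplify the models' disagreement, so the agent defaults to conservative behavior exactly where its transferred experience is least reliable.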
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.