Red-Team Multi-Agent Reinforcement Learning for Emergency Braking Scenario
- URL: http://arxiv.org/abs/2507.15587v1
- Date: Mon, 21 Jul 2025 13:08:49 GMT
- Title: Red-Team Multi-Agent Reinforcement Learning for Emergency Braking Scenario
- Authors: Yinsong Chen, Kaifeng Wang, Xiaoqiang Meng, Xueyuan Li, Zirui Li, Xin Gao
- Abstract summary: We propose a Red-Team Multi-Agent Reinforcement Learning framework, where background vehicles with interference capabilities are treated as red-team agents. The framework uses a Constraint Graph Representation Markov Decision Process, ensuring that red-team vehicles comply with safety rules while continuously disrupting the autonomous vehicles. Experimental results show that the proposed framework significantly impacts AVs' decision-making safety and generates various corner cases.
- Score: 10.732297296923784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current research on decision-making in safety-critical scenarios often relies on inefficient data-driven scenario generation or specific modeling approaches, which fail to capture corner cases in real-world contexts. To address this issue, we propose a Red-Team Multi-Agent Reinforcement Learning framework, where background vehicles with interference capabilities are treated as red-team agents. Through active interference and exploration, red-team vehicles can uncover corner cases outside the data distribution. The framework uses a Constraint Graph Representation Markov Decision Process, ensuring that red-team vehicles comply with safety rules while continuously disrupting the autonomous vehicles (AVs). A policy threat zone model is constructed to quantify the threat posed by red-team vehicles to AVs, inducing more extreme actions to increase the danger level of the scenario. Experimental results show that the proposed framework significantly impacts AVs' decision-making safety and generates various corner cases. This method also offers a novel direction for research in safety-critical scenarios.
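To make the red-team idea concrete, below is a minimal Python sketch of how a red-team background vehicle's reward could combine a policy-threat-zone score with a safety-rule penalty. This is not the paper's implementation: the state representation, the time-to-collision geometry, and all names and weights are illustrative assumptions.

```python
# Illustrative sketch only: a reward-shaping term for a red-team background
# vehicle that is paid for pressuring the AV while being penalized for breaking
# safety rules. All names, thresholds, and the threat-zone geometry are assumed.
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float   # longitudinal position [m]
    y: float   # lateral position [m]
    vx: float  # longitudinal speed [m/s]

def time_to_collision(av: VehicleState, rt: VehicleState, eps: float = 1e-3) -> float:
    """Coarse 1-D time-to-collision of the AV toward a red-team vehicle ahead of it."""
    gap = rt.x - av.x
    closing_speed = av.vx - rt.vx
    if gap <= 0.0 or closing_speed <= eps:
        return float("inf")  # no longitudinal closing, no threat along this axis
    return gap / closing_speed

def threat_zone_score(av: VehicleState, rt: VehicleState,
                      ttc_horizon: float = 4.0, lateral_scale: float = 1.5) -> float:
    """Score in [0, 1]: higher when the red-team vehicle sits in a zone that
    pressures the AV (low time-to-collision, small lateral offset)."""
    ttc = time_to_collision(av, rt)
    longitudinal_term = max(0.0, 1.0 - min(ttc, ttc_horizon) / ttc_horizon)
    lateral_term = math.exp(-abs(rt.y - av.y) / lateral_scale)
    return longitudinal_term * lateral_term

def red_team_reward(av: VehicleState, rt: VehicleState,
                    violated_safety_rule: bool, rule_penalty: float = 10.0) -> float:
    """Reward the red-team agent for raising the scenario's danger level while
    approximating the framework's hard safety constraints with a fixed penalty."""
    reward = threat_zone_score(av, rt)
    if violated_safety_rule:
        reward -= rule_penalty
    return reward
```

In the framework described above, such a term would be maximized by the red-team MARL policies while the Constraint Graph Representation MDP enforces the safety rules; the fixed penalty here is only a stand-in for that constraint handling.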
Related papers
- SafeMobile: Chain-level Jailbreak Detection and Automated Evaluation for Multimodal Mobile Agents [58.21223208538351]
This work explores the security issues surrounding mobile multimodal agents. It attempts to construct a risk discrimination mechanism by incorporating behavioral sequence information. It also designs an automated assisted assessment scheme based on a large language model.
arXiv Detail & Related papers (2025-07-01T15:10:00Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems. We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics. We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - CRASH: Challenging Reinforcement-Learning Based Adversarial Scenarios For Safety Hardening [16.305837225117607]
This paper introduces CRASH - Challenging Reinforcement-learning based Adversarial scenarios for Safety Hardening.
First, CRASH can control adversarial Non-Player Character (NPC) agents in an AV simulator to automatically induce collisions with the ego vehicle.
We also propose a novel approach, termed safety hardening, which iteratively refines the motion planner by simulating improvement scenarios against adversarial agents.
arXiv Detail & Related papers (2024-11-26T00:00:27Z) - A novel framework for adaptive stress testing of autonomous vehicles in highways [3.2112502548606825]
We propose a novel framework to explore corner cases that can result in safety concerns in a highway traffic scenario.
We develop a new reward function for deep reinforcement learning (DRL) to guide the adaptive stress testing (AST) in identifying crash scenarios based on the collision probability estimate.
The proposed framework is further integrated with a new driving model enabling us to create more realistic traffic scenarios.
arXiv Detail & Related papers (2024-02-19T04:02:40Z) - SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - Generating and Explaining Corner Cases Using Learnt Probabilistic Lane Graphs [5.309950889075669]
We introduce Probabilistic Lane Graphs (PLGs) to describe a finite set of lane positions and directions in which vehicles might travel.
The structure of PLGs is learnt directly from historic traffic data.
We use reinforcement learning techniques to modify this policy to generate realistic and explainable corner case scenarios.
arXiv Detail & Related papers (2023-08-25T20:17:49Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - RCP-RF: A Comprehensive Road-car-pedestrian Risk Management Framework based on Driving Risk Potential Field [1.625213292350038]
We propose a comprehensive driving risk management framework named RCP-RF based on potential field theory under Connected and Automated Vehicles (CAV) environment.
Unlike existing algorithms, the proposed framework explicitly accounts for the motion tendency between the ego and obstacle cars as well as the pedestrian factor.
Empirical studies validate the superiority of the proposed framework over state-of-the-art methods on the real-world NGSIM dataset and a real AV platform.
arXiv Detail & Related papers (2023-05-04T01:54:37Z) - Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm, leveraging recent advances in deep reinforcement learning, to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - Corner Case Generation and Analysis for Safety Assessment of Autonomous Vehicles [3.673699859949693]
A unified framework is proposed to generate corner cases for the decision-making systems.
Deep reinforcement learning techniques are applied to learn the behavior policy of background vehicles (BVs).
With the learned policy, BVs will behave and interact with the CAVs more aggressively, resulting in more corner cases.
arXiv Detail & Related papers (2021-02-06T02:48:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.