Risk Scenario Generation for Autonomous Driving Systems based on Causal Bayesian Networks
- URL: http://arxiv.org/abs/2405.16063v1
- Date: Sat, 25 May 2024 05:26:55 GMT
- Title: Risk Scenario Generation for Autonomous Driving Systems based on Causal Bayesian Networks
- Authors: Jiangnan Zhao, Dehui Du, Xing Yu, Hang Li
- Abstract summary: We propose a novel paradigm shift towards utilizing Causal Bayesian Networks (CBN) for scenario generation in Autonomous Driving Systems (ADS)
CBN is built and validated using Maryland accident data, providing a deeper insight into the myriad factors influencing autonomous driving behaviors.
An end-to-end testing framework for ADS is established utilizing the CARLA simulator.
- Score: 4.172581773205466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advancements in Autonomous Driving Systems (ADS) have brought significant benefits, but also raised concerns regarding their safety. Virtual tests are common practices to ensure the safety of ADS because they are more efficient and safer compared to field operational tests. However, capturing the complex dynamics of real-world driving environments and effectively generating risk scenarios for testing is challenging. In this paper, we propose a novel paradigm shift towards utilizing Causal Bayesian Networks (CBN) for scenario generation in ADS. The CBN is built and validated using Maryland accident data, providing a deeper insight into the myriad factors influencing autonomous driving behaviors. Based on the constructed CBN, we propose an algorithm that significantly enhances the process of risk scenario generation, leading to more effective and safer ADS. An end-to-end testing framework for ADS is established utilizing the CARLA simulator. Through experiments, we successfully generated 89 high-risk scenarios from 5 seed scenarios, outperforming baseline methods in terms of time and iterations required.
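As a rough illustration of the CBN-based workflow described in the abstract, the sketch below hand-specifies a small Bayesian network over a few accident factors with pgmpy, fits it on toy tabular crash records, queries the collision risk under an adverse factor configuration, and forward-samples candidate scenario parameterizations. The variable set, edge structure, and data schema are illustrative assumptions, not the CBN the authors learn from the Maryland accident data.

```python
# Minimal sketch, assuming a hand-chosen set of accident factors.
# The paper instead builds and validates its CBN from Maryland accident data.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from pgmpy.sampling import BayesianModelSampling

# Toy crash records standing in for real accident data.
data = pd.DataFrame({
    "Weather":   ["rain", "rain", "clear", "clear", "rain", "clear", "rain", "clear"],
    "Lighting":  ["dark", "dark", "day",   "day",   "day",  "dark",  "dark", "day"],
    "Speed":     ["high", "high", "low",   "low",   "high", "low",   "low",  "high"],
    "Collision": ["yes",  "yes",  "no",    "no",    "no",   "no",    "yes",  "no"],
})

# Assumed causal edges: environment and speed influence the collision outcome.
cbn = BayesianNetwork([
    ("Weather", "Collision"),
    ("Lighting", "Collision"),
    ("Speed", "Collision"),
])
cbn.fit(data, estimator=MaximumLikelihoodEstimator)

# Query collision probability under a risky environment configuration.
posterior = VariableElimination(cbn).query(
    variables=["Collision"],
    evidence={"Weather": "rain", "Lighting": "dark", "Speed": "high"},
)
print(posterior)

# Forward-sample candidate scenario parameterizations; high-risk samples
# could then be instantiated as concrete simulator scenarios.
candidates = BayesianModelSampling(cbn).forward_sample(size=5)
print(candidates)
```

In the paper's pipeline, high-risk factor assignments identified in this way would be mapped onto concrete CARLA scenarios and run against the ADS under test; that mapping is not shown here.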
Related papers
- Generating Out-Of-Distribution Scenarios Using Language Models [58.47597351184034]
Large Language Models (LLMs) have shown promise in autonomous driving.
This paper introduces a framework for generating diverse Out-Of-Distribution (OOD) driving scenarios.
We evaluate our framework through extensive simulations and introduce a new "OOD-ness" metric.
arXiv Detail & Related papers (2024-11-25T16:38:17Z)
- AdvDiffuser: Generating Adversarial Safety-Critical Driving Scenarios via Guided Diffusion [6.909801263560482]
AdvDiffuser is an adversarial framework for generating safety-critical driving scenarios through guided diffusion.
We show that AdvDiffuser can be applied to various tested systems with minimal warm-up episode data.
arXiv Detail & Related papers (2024-10-11T02:03:21Z)
- GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation [0.14999444543328289]
Goal-conditioned Scenario Generation (GOOSE) is a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios.
We demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
arXiv Detail & Related papers (2024-06-06T08:59:08Z)
- PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show that PAFOT can effectively generate safety-critical scenarios that crash ADSs and find collisions within a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Boundary State Generation for Testing and Improvement of Autonomous Driving Systems [8.670873561640903]
We present GENBO, a novel test generator for autonomous driving systems (ADSs).
We use the generated boundary conditions to augment the initial training dataset and retrain the DNN model under test.
Our evaluation shows that the retrained model achieves, on average, up to a 3x higher success rate on a separate set of evaluation tracks compared with the original DNN model.
arXiv Detail & Related papers (2023-07-20T05:07:51Z)
- Realistic Safety-critical Scenarios Search for Autonomous Driving System via Behavior Tree [8.286351881735191]
We propose the Matrix-Fuzzer, a behavior tree-based testing framework, to automatically generate realistic safety-critical test scenarios.
Our approach finds the most types of safety-critical scenarios while generating only around 30% of the total scenarios produced by the baseline algorithm.
arXiv Detail & Related papers (2023-05-11T06:53:03Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)