Safety Analysis of Autonomous Driving Systems Based on Model Learning
- URL: http://arxiv.org/abs/2211.12733v1
- Date: Wed, 23 Nov 2022 06:52:40 GMT
- Title: Safety Analysis of Autonomous Driving Systems Based on Model Learning
- Authors: Renjue Li, Tianhang Qin, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun
and Lijun Zhang
- Abstract summary: We present a practical verification method for safety analysis of the autonomous driving system (ADS).
The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in the specified traffic scenario.
We demonstrate the utility of the proposed approach by evaluating safety properties on the state-of-the-art ADS in literature.
- Score: 16.38592243376647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a practical verification method for safety analysis of the
autonomous driving system (ADS). The main idea is to build a surrogate model
that quantitatively depicts the behaviour of an ADS in the specified traffic
scenario. The safety properties proved in the resulting surrogate model apply
to the original ADS with a probabilistic guarantee. Furthermore, we explore the
safe and the unsafe parameter space of the traffic scenario for driving
hazards. We demonstrate the utility of the proposed approach by evaluating
safety properties on the state-of-the-art ADS in literature, with a variety of
simulated traffic scenarios.
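The probabilistic-guarantee idea behind the approach can be illustrated with a minimal Monte Carlo sketch. All names below are hypothetical, and a simple kinematic formula stands in for the black-box ADS simulation; the paper's own method additionally learns a surrogate model of the ADS rather than sampling it directly. The sketch uses Hoeffding's inequality: with n >= ln(2/delta) / (2 * eps^2) i.i.d. simulated runs, the empirical unsafe rate is within eps of the true rate with probability at least 1 - delta.

```python
import math
import random

def simulate_ads(init_gap_m, speed_mps):
    """One black-box run of the ADS in a braking scenario.

    Hypothetical kinematic stand-in for illustration only; in the
    paper's setting each run would query a full driving simulator."""
    braking = random.uniform(5.0, 7.0)          # m/s^2, actuation noise
    stopping = speed_mps ** 2 / (2 * braking)   # braking distance (m)
    return stopping < init_gap_m                # True = no collision

def samples_for_guarantee(eps, delta):
    """Hoeffding bound: this many i.i.d. runs suffice for the empirical
    unsafe rate to be within eps of the true rate w.p. >= 1 - delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def check_safety(scenario_sampler, eps=0.05, delta=1e-3):
    """Estimate the unsafe-behaviour rate with a PAC-style guarantee."""
    n = samples_for_guarantee(eps, delta)
    unsafe = sum(not simulate_ads(*scenario_sampler()) for _ in range(n))
    return unsafe / n, n

if __name__ == "__main__":
    random.seed(0)
    # Scenario parameter space: initial gap 20-60 m, ego speed 8-20 m/s.
    sampler = lambda: (random.uniform(20.0, 60.0), random.uniform(8.0, 20.0))
    rate, n = check_safety(sampler)
    print(f"empirical unsafe rate {rate:.4f} over {n} runs")
```

Exploring the safe and unsafe parameter space then amounts to evaluating such estimates (or the learned surrogate) over regions of `(gap, speed)` rather than a single distribution.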
Related papers
- GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation [0.14999444543328289]
Goal-conditioned Scenario Generation (GOOSE) is a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios.
We demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
arXiv Detail & Related papers (2024-06-06T08:59:08Z)
- A novel framework for adaptive stress testing of autonomous vehicles in highways [3.2112502548606825]
We propose a novel framework to explore corner cases that can result in safety concerns in a highway traffic scenario.
We develop a new reward function for deep reinforcement learning (DRL) to guide the adaptive stress testing (AST) in identifying crash scenarios based on the collision probability estimate.
The proposed framework is further integrated with a new driving model enabling us to create more realistic traffic scenarios.
arXiv Detail & Related papers (2024-02-19T04:02:40Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a novel diffusion-based controllable closed-loop safety-critical simulation framework.
We develop a novel approach to simulate safety-critical scenarios through an adversarial term in the denoising process.
We validate our framework empirically using the NuScenes dataset, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs act as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Interpreting Safety Outcomes: Waymo's Performance Evaluation in the Context of a Broader Determination of Safety Readiness [0.0]
This paper highlights the need for a diversified approach to safety determination that complements the analysis of observed safety outcomes with other estimation techniques.
Our discussion highlights: the presentation of a "credibility paradox" within the comparison between ADS crash data and human-derived baselines, the recognition of continuous confidence growth through in-use monitoring, and the need to supplement any aggregate statistical analysis with appropriate event-level reasoning.
arXiv Detail & Related papers (2023-06-23T14:26:40Z)
- I Know You Can't See Me: Dynamic Occlusion-Aware Safety Validation of Strategic Planners for Autonomous Vehicles Using Hypergames [12.244501203346566]
We develop a novel multi-agent dynamic occlusion risk measure for assessing situational risk.
We present a white-box, scenario-based, accelerated safety validation framework for assessing the safety of strategic planners in AVs.
arXiv Detail & Related papers (2021-09-20T19:38:14Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce CARNOVEL, an autonomous-car novel-scene benchmark, to evaluate the robustness of driving agents on a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.