Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard
Arbitration Reward
- URL: http://arxiv.org/abs/2112.06185v1
- Date: Sun, 12 Dec 2021 08:58:32 GMT
- Title: Multi-Agent Vulnerability Discovery for Autonomous Driving with Hazard
Arbitration Reward
- Authors: Weilin Liu, Ye Mu, Chao Yu, Xuefei Ning, Zhong Cao, Yi Wu, Shuang
Liang, Huazhong Yang, Yu Wang
- Abstract summary: This work proposes a Safety Test framework by finding AV-Responsible Scenarios (STARS) based on multi-agent reinforcement learning.
STARS guides other traffic participants to produce AV-responsible scenarios that make the under-test driving policy misbehave.
- Score: 21.627246586543542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering hazardous scenarios is crucial in testing and further improving
driving policies. However, conducting efficient driving policy testing faces
two key challenges. On the one hand, the probability of naturally encountering
hazardous scenarios is low when testing a well-trained autonomous driving
strategy. Thus, discovering these scenarios by purely real-world road testing
is extremely costly. On the other hand, a proper determination of accident
responsibility is necessary for this task. Collecting scenarios with
wrong-attributed responsibilities will lead to an overly conservative
autonomous driving strategy. To be more specific, we aim to discover hazardous
scenarios that are autonomous-vehicle responsible (AV-responsible), i.e., the
vulnerabilities of the under-test driving policy.
To this end, this work proposes a Safety Test framework by finding
AV-Responsible Scenarios (STARS) based on multi-agent reinforcement learning.
STARS guides other traffic participants to produce AV-responsible scenarios and
makes the under-test driving policy misbehave by introducing a Hazard Arbitration
Reward (HAR). HAR enables our framework to discover diverse, complex, and
AV-responsible hazardous scenarios. Experimental results against four different
driving policies in three environments demonstrate that STARS can effectively
discover AV-responsible hazardous scenarios. These scenarios indeed correspond
to the vulnerabilities of the under-test driving policies and are thus meaningful
for their further improvement.
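The abstract does not specify the exact form of the Hazard Arbitration Reward. As an illustrative assumption only, a HAR-style signal for the adversarial traffic agents might reward collisions solely when responsibility is attributed to the AV under test; the function name, fault check, and step penalty below are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of a Hazard Arbitration Reward (HAR)-style signal
# for adversarial traffic agents in a MARL safety-testing setup.
# All names and numeric values here are illustrative assumptions.

def hazard_arbitration_reward(collision: bool, av_at_fault: bool,
                              step_penalty: float = 0.01) -> float:
    """Reward one adversarial traffic agent for a single timestep.

    The reward is positive only for AV-responsible hazards, so learned
    scenarios expose vulnerabilities of the policy under test rather
    than crashes the adversaries caused themselves.
    """
    if collision and av_at_fault:
        return 1.0            # AV-responsible hazard discovered: reward
    if collision and not av_at_fault:
        return -1.0           # adversary caused the crash: penalize
    return -step_penalty      # small cost per step: find hazards quickly
```

Under this sketch, a collision attributed to the adversary is penalized rather than rewarded, which is one plausible way an arbitration term could steer search away from wrong-attributed responsibilities.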
Related papers
- Automated and Complete Generation of Traffic Scenarios at Road Junctions Using a Multi-level Danger Definition [2.5608506499175094]
We propose an approach to derive a complete set of (potentially dangerous) abstract scenarios at any given road junction.
From these abstract scenarios, we derive exact paths that actors must follow to guide simulation-based testing.
Results show that the AV-under-test is involved in increasing percentages of unsafe behaviors in simulation.
arXiv Detail & Related papers (2024-10-09T17:23:51Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based approach testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show PAFOT can effectively generate safety-critical scenarios to crash ADSs and is able to find collisions in a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z)
- ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events [1.84926694477846]
We propose a black-box testing framework that uses offline trajectories first to analyze the existing behavior of autonomous vehicles.
Our experiments show increases of 35%, 23%, 48%, and 50% in the occurrence of vehicle-collision, road-object-collision, pedestrian-collision, and off-road-steering events, respectively.
arXiv Detail & Related papers (2023-08-28T13:09:00Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Attacks and Faults Injection in Self-Driving Agents on the Carla Simulator -- Experience Report [1.933681537640272]
We report on the injection of adversarial attacks and software faults in a self-driving agent running in a driving simulator.
We show that adversarial attacks and faults injected in the trained agent can lead to erroneous decisions and severely jeopardize safety.
arXiv Detail & Related papers (2022-02-25T21:46:12Z)
- Pedestrian Emergence Estimation and Occlusion-Aware Risk Assessment for Urban Autonomous Driving [0.0]
We propose a pedestrian emergence estimation and occlusion-aware risk assessment system for urban autonomous driving.
First, the proposed system utilizes available contextual information, such as visible cars and pedestrians, to estimate pedestrian emergence probabilities in occluded regions.
The proposed controller outperformed the baselines in terms of safety and comfort measures.
arXiv Detail & Related papers (2021-07-06T00:07:09Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous-car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents across a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.