An LLM-enhanced Multi-objective Evolutionary Search for Autonomous Driving Test Scenario Generation
- URL: http://arxiv.org/abs/2406.10857v1
- Date: Sun, 16 Jun 2024 09:05:56 GMT
- Title: An LLM-enhanced Multi-objective Evolutionary Search for Autonomous Driving Test Scenario Generation
- Authors: Haoxiang Tian, Xingshuo Han, Guoquan Wu, Yuan Zhou, Shuo Li, Jun Wei, Dan Ye, Wei Wang, Tianwei Zhang
- Abstract summary: Generating diverse safety-critical test scenarios is a key task in testing Autonomous Driving Systems (ADSs).
This paper proposes LEADE, an LLM-enhanced scenario generation approach for ADS testing.
We implement and evaluate LEADE on an industrial-grade full-stack ADS platform, Baidu Apollo.
- Score: 23.176669620953668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The safety of Autonomous Driving Systems (ADSs) is critical to the deployment of autonomous vehicles (AVs), so ADSs must be evaluated thoroughly before they are released to the public. Generating diverse safety-critical test scenarios is a key task in ADS testing. This paper proposes LEADE, an LLM-enhanced scenario generation approach for ADS testing that adopts LLM-enhanced adaptive evolutionary search to generate diverse, safety-critical test scenarios. LEADE leverages the LLM's program-understanding ability to comprehend the scenario generation task and produce a high-quality first generation of scenarios. It then applies an adaptive multi-objective genetic algorithm to search for diverse safety-critical scenarios. To steer the search away from local optima, LEADE reformulates the evolutionary search as a question-answering task and uses the LLM's quantitative reasoning to generate differential seed scenarios that break out of locally optimal solutions. We implement and evaluate LEADE on an industrial-grade full-stack ADS platform, Baidu Apollo. Experimental results show that LEADE effectively and efficiently generates safety-critical scenarios, exposing 10 diverse safety violations in Apollo. It outperforms two state-of-the-art search-based ADS testing techniques by identifying 4 new types of safety-critical scenarios on the same roads.
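To make the described loop concrete, below is a minimal Python sketch of a multi-objective evolutionary search with LLM hooks at the two points the abstract names: drafting the first generation and injecting differential seeds when the search stalls. All names here (Scenario parameters, simulate, llm_propose_scenarios, the two objectives) are hypothetical stand-ins, not LEADE's actual interfaces; in LEADE the restart step is phrased as a question-answering prompt to the LLM, which is reduced to a stub here.

```python
# Sketch only: a simple steady-state multi-objective GA over scenario
# parameters, seeded and periodically re-seeded by an (assumed) LLM call.
import random

def simulate(params):
    # Stand-in for running the scenario in a simulator and returning two
    # objectives to MINIMIZE, e.g. (min ego-NPC distance, similarity to
    # previously found scenarios). Placeholder arithmetic only.
    return (sum(params), max(params) - min(params))

def dominates(fa, fb):
    # Pareto dominance for minimization: fa is no worse everywhere
    # and strictly better somewhere.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def llm_propose_scenarios(n, prompt=""):
    # Stand-in for prompting an LLM to draft scenario parameter vectors
    # (generation 0, or differential seeds to escape a local optimum).
    return [[random.uniform(0, 30) for _ in range(4)] for _ in range(n)]

def mutate(params, sigma=1.0):
    return [p + random.gauss(0, sigma) for p in params]

def evolve(pop_size=20, generations=100, stall_limit=5):
    pop = llm_propose_scenarios(pop_size)      # LLM-drafted first generation
    stall = 0
    for _ in range(generations):
        child = mutate(random.choice(pop))
        fc = simulate(child)
        worse = [p for p in pop if dominates(fc, simulate(p))]
        if worse:                              # child improves the front:
            pop.remove(random.choice(worse))   # replace a dominated parent
            pop.append(child)
            stall = 0
        else:
            stall += 1
        if stall >= stall_limit:               # suspected local optimum:
            pop += llm_propose_scenarios(3, "generate differential seeds")
            pop = pop[-pop_size:]              # keep population bounded
            stall = 0
    return pop
```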
Related papers
- LLM-attacker: Enhancing Closed-loop Adversarial Scenario Generation for Autonomous Driving with Large Language Models [39.139025989575686]
A closed-loop adversarial scenario generation framework leveraging large language models (LLMs).
Adversarial scenario generation methods are developed in which the behaviors of traffic participants are manipulated to induce safety-critical events.
LLM-attacker can create more dangerous scenarios than other methods, and an ADS trained with it achieves a collision rate half that of training with normal scenarios.
arXiv Detail & Related papers (2025-01-27T08:18:52Z) - MARL-OT: Multi-Agent Reinforcement Learning Guided Online Fuzzing to Detect Safety Violation in Autonomous Driving Systems [1.1677228160050082]
This paper introduces MARL-OT, a scalable framework that leverages MARL to detect safety violations of Autonomous Driving Systems (ADSs).
MARL-OT employs MARL for high-level guidance, triggering various dangerous scenarios for the rule-based online fuzzer to explore potential safety violations of ADSs.
Our approach improves the detected safety violation rate by up to 136.2% compared to the state-of-the-art (SOTA) testing technique.
arXiv Detail & Related papers (2025-01-24T12:34:04Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - SimADFuzz: Simulation-Feedback Fuzz Testing for Autonomous Driving Systems [5.738863204900633]
SimADFuzz is a novel framework designed to generate high-quality scenarios that reveal violations in autonomous driving systems.
SimADFuzz employs violation prediction models, which evaluate the likelihood of ADS violations, to optimize scenario selection.
Comprehensive experiments demonstrate that SimADFuzz outperforms state-of-the-art fuzzers by identifying 32 more unique violations.
arXiv Detail & Related papers (2024-12-18T12:49:57Z) - Generating Out-Of-Distribution Scenarios Using Language Models [58.47597351184034]
Large Language Models (LLMs) have shown promise in autonomous driving.
This paper introduces a framework for generating diverse Out-Of-Distribution (OOD) driving scenarios.
We evaluate our framework through extensive simulations and introduce a new "OOD-ness" metric.
arXiv Detail & Related papers (2024-11-25T16:38:17Z) - EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - Multimodal Large Language Model Driven Scenario Testing for Autonomous Vehicles [6.836108615628114]
We propose OmniTester: a framework that generates realistic and diverse scenarios within a simulation environment.
In the experiments, we demonstrated the controllability and realism of our approaches in generating three types of challenging and complex scenarios.
arXiv Detail & Related papers (2024-09-10T12:12:09Z) - GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation [0.14999444543328289]
Goal-conditioned Scenario Generation (GOOSE) is a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios.
We demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
arXiv Detail & Related papers (2024-06-06T08:59:08Z) - Risk Scenario Generation for Autonomous Driving Systems based on Causal Bayesian Networks [4.172581773205466]
We propose a novel paradigm shift towards utilizing Causal Bayesian Networks (CBN) for scenario generation in Autonomous Driving Systems (ADS).
The CBN is built and validated using Maryland accident data, providing deeper insight into the myriad factors influencing autonomous driving behaviors.
An end-to-end testing framework for ADS is established utilizing the CARLA simulator.
arXiv Detail & Related papers (2024-05-25T05:26:55Z) - ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation [48.54271457765236]
Large Language Models (LLMs) can elicit unintended and even harmful content when misaligned with human values.
Current evaluation benchmarks predominantly employ expert-designed contextual scenarios to assess how well LLMs align with human values.
We propose ALI-Agent, an evaluation framework that leverages the autonomous abilities of LLM-powered agents to conduct in-depth and adaptive alignment assessments.
arXiv Detail & Related papers (2024-05-23T02:57:42Z) - PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based testing framework that generates adversarial driving scenarios to expose safety violations of Automated Driving Systems (ADSs).
Experiments show PAFOT can effectively generate safety-critical scenarios to crash ADSs and is able to find collisions in a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, even multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z) - AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)