An LLM-enhanced Multi-objective Evolutionary Search for Autonomous Driving Test Scenario Generation
- URL: http://arxiv.org/abs/2406.10857v1
- Date: Sun, 16 Jun 2024 09:05:56 GMT
- Title: An LLM-enhanced Multi-objective Evolutionary Search for Autonomous Driving Test Scenario Generation
- Authors: Haoxiang Tian, Xingshuo Han, Guoquan Wu, Yuan Zhou, Shuo Li, Jun Wei, Dan Ye, Wei Wang, Tianwei Zhang
- Abstract summary: How to generate diverse safety-critical test scenarios is a key task for Autonomous Driving Systems (ADSs) testing.
This paper proposes LEADE, an LLM-enhanced scenario generation approach for ADS testing.
We implement and evaluate LEADE on the industrial-grade full-stack ADS platform Baidu Apollo.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The safety of Autonomous Driving Systems (ADSs) is critical to the deployment of autonomous vehicles (AVs), so ADSs must be evaluated thoroughly before release to the public. Generating diverse safety-critical test scenarios is a key task in ADS testing. This paper proposes LEADE, an LLM-enhanced scenario generation approach for ADS testing that adopts LLM-enhanced adaptive evolutionary search to generate safety-critical and diverse test scenarios. LEADE leverages the LLM's program-understanding ability to better comprehend the scenario generation task and produce high-quality first-generation scenarios. It then adopts an adaptive multi-objective genetic algorithm to search for diverse safety-critical scenarios. To guide the search away from local optima, LEADE formulates the evolutionary search as a question-answering (QA) task, leveraging the LLM's quantitative-reasoning ability to generate differential seed scenarios that break out of locally optimal solutions. We implement and evaluate LEADE on the industrial-grade full-stack ADS platform Baidu Apollo. Experimental results show that LEADE effectively and efficiently generates safety-critical scenarios and exposes 10 diverse safety violations in Apollo. It outperforms two state-of-the-art search-based ADS testing techniques by identifying 4 new types of safety-critical scenarios on the same roads.
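The search loop the abstract describes (an LLM-seeded first generation, multi-objective selection, and an LLM-produced "differential seed" to escape local optima) can be sketched in miniature. Everything below is a hypothetical illustration, not LEADE's actual implementation: the parameter bounds, the toy objectives, and the two `llm_*` functions (which here are simple random stand-ins for what the paper implements as LLM calls) are all assumptions.

```python
import random

random.seed(0)  # deterministic sketch

# Hypothetical scenario encoding: each test scenario is a small parameter
# vector (e.g. NPC speed, spawn distance, trigger time). Bounds are illustrative.
PARAM_BOUNDS = [(0.0, 20.0), (5.0, 100.0), (0.0, 10.0)]

def llm_seed_scenarios(n):
    """Stand-in for LEADE's first step, where an LLM reads the scenario
    generation program and proposes high-quality initial scenarios.
    Here we simply sample uniformly."""
    return [[random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS]
            for _ in range(n)]

def llm_differential_seed(best):
    """Stand-in for the QA-formulated escape step: LEADE asks the LLM to
    reason quantitatively and return a *differential* scenario that moves
    the search off a local optimum. Sketched as a large bounded jitter."""
    return [min(hi, max(lo, x + random.uniform(-0.3, 0.3) * (hi - lo)))
            for x, (lo, hi) in zip(best, PARAM_BOUNDS)]

def objectives(s):
    """Toy surrogate objectives, both minimized: a safety-distance proxy
    and a spread term standing in for diversity pressure."""
    f1 = abs(s[1] - s[0] * s[2])
    f2 = sum((x - lo) / (hi - lo) for x, (lo, hi) in zip(s, PARAM_BOUNDS))
    return (f1, f2)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def search(pop_size=20, generations=30, stall_limit=5):
    pop = llm_seed_scenarios(pop_size)
    best, stall = None, 0
    front = pop[:]
    for _ in range(generations):
        scored = [(objectives(s), s) for s in pop]
        # keep the non-dominated scenarios (Pareto front of this generation)
        front = [s for f, s in scored
                 if not any(dominates(g, f) for g, _ in scored)]
        leader = min(scored)[1]
        if best is None or objectives(leader) < objectives(best):
            best, stall = leader, 0
        else:
            stall += 1
        if stall >= stall_limit:                       # local optimum suspected:
            front.append(llm_differential_seed(best))  # LLM-style reseed
            stall = 0
        # one-point crossover + Gaussian mutation over the current front
        children = []
        while len(children) < pop_size:
            p, q = (random.sample(front, 2) if len(front) > 1
                    else (front[0], front[0]))
            cut = random.randrange(len(PARAM_BOUNDS))
            c = p[:cut] + q[cut:]
            i = random.randrange(len(c))
            lo, hi = PARAM_BOUNDS[i]
            c[i] = min(hi, max(lo, c[i] + random.gauss(0, 0.05 * (hi - lo))))
            children.append(c)
        pop = children
    return front

pareto = search()
print(len(pareto))
```

In the paper, the surrogate objectives would instead be measured by executing each scenario on Apollo in simulation, and both stand-in functions would be prompts to an LLM; the skeleton above only shows where those calls sit in the evolutionary loop.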
Related papers
- Generating Out-Of-Distribution Scenarios Using Language Models [58.47597351184034]
Large Language Models (LLMs) have shown promise in autonomous driving.
This paper introduces a framework for generating diverse Out-Of-Distribution (OOD) driving scenarios.
We evaluate our framework through extensive simulations and introduce a new "OOD-ness" metric.
arXiv Detail & Related papers (2024-11-25T16:38:17Z) - EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - Multimodal Large Language Model Driven Scenario Testing for Autonomous Vehicles [6.836108615628114]
We propose OmniTester: a framework that generates realistic and diverse scenarios within a simulation environment.
In the experiments, we demonstrated the controllability and realism of our approaches in generating three types of challenging and complex scenarios.
arXiv Detail & Related papers (2024-09-10T12:12:09Z) - GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation [0.14999444543328289]
Goal-conditioned Scenario Generation (GOOSE) is a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios.
We demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
arXiv Detail & Related papers (2024-06-06T08:59:08Z) - Risk Scenario Generation for Autonomous Driving Systems based on Causal Bayesian Networks [4.172581773205466]
We propose a novel paradigm shift towards utilizing Causal Bayesian Networks (CBN) for scenario generation in Autonomous Driving Systems (ADS).
CBN is built and validated using Maryland accident data, providing a deeper insight into the myriad factors influencing autonomous driving behaviors.
An end-to-end testing framework for ADS is established utilizing the CARLA simulator.
arXiv Detail & Related papers (2024-05-25T05:26:55Z) - ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation [48.54271457765236]
Large Language Models (LLMs) can elicit unintended and even harmful content when misaligned with human values.
Current evaluation benchmarks predominantly employ expert-designed contextual scenarios to assess how well LLMs align with human values.
We propose ALI-Agent, an evaluation framework that leverages the autonomous abilities of LLM-powered agents to conduct in-depth and adaptive alignment assessments.
arXiv Detail & Related papers (2024-05-23T02:57:42Z) - PAFOT: A Position-Based Approach for Finding Optimal Tests of Autonomous Vehicles [4.243926243206826]
This paper proposes PAFOT, a position-based approach testing framework.
PAFOT generates adversarial driving scenarios to expose safety violations of Automated Driving Systems.
Experiments show PAFOT can effectively generate safety-critical scenarios to crash ADSs and is able to find collisions in a short simulation time.
arXiv Detail & Related papers (2024-05-06T10:04:40Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, including multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.