Enhancing Autonomous Vehicle Training with Language Model Integration and Critical Scenario Generation
- URL: http://arxiv.org/abs/2404.08570v1
- Date: Fri, 12 Apr 2024 16:13:10 GMT
- Title: Enhancing Autonomous Vehicle Training with Language Model Integration and Critical Scenario Generation
- Authors: Hanlin Tian, Kethan Reddy, Yuxiang Feng, Mohammed Quddus, Yiannis Demiris, Panagiotis Angeloudis
- Abstract summary: CRITICAL is a novel closed-loop framework for autonomous vehicle (AV) training and testing.
The framework achieves this by integrating real-world traffic dynamics, driving behavior analysis, surrogate safety measures, and an optional Large Language Model (LLM) component.
- Score: 32.02261963851354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces CRITICAL, a novel closed-loop framework for autonomous vehicle (AV) training and testing. CRITICAL stands out for its ability to generate diverse scenarios, focusing on critical driving situations that target specific learning and performance gaps identified in the Reinforcement Learning (RL) agent. The framework achieves this by integrating real-world traffic dynamics, driving behavior analysis, surrogate safety measures, and an optional Large Language Model (LLM) component. We show that establishing a closed feedback loop between the data generation pipeline and the training process enhances the learning rate during training, elevates overall system performance, and strengthens safety resilience. Our evaluations, conducted using Proximal Policy Optimization (PPO) and the HighwayEnv simulation environment, demonstrate noticeable performance improvements with the integration of critical case generation and LLM analysis, indicating CRITICAL's potential to improve the robustness of AV systems and streamline the generation of critical scenarios. This ultimately serves to hasten the development of AV agents, expand the general scope of RL training, and improve validation efforts for AV safety.
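The abstract describes filtering generated scenarios with surrogate safety measures and feeding the critical ones back into RL training. The paper's exact pipeline is not reproduced here; the following is a minimal illustrative sketch of the scenario-selection step only, using time-to-collision (TTC) as the surrogate safety measure. The `Scenario` fields, the TTC threshold, and the random scenario generator are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): select "critical"
# car-following scenarios via a time-to-collision surrogate safety measure.
import random
from dataclasses import dataclass


@dataclass
class Scenario:
    ego_speed: float   # m/s, speed of the ego (trained) vehicle
    gap: float         # m, distance to the lead vehicle
    lead_speed: float  # m/s, speed of the lead vehicle


def time_to_collision(s: Scenario) -> float:
    """Surrogate safety measure: seconds until the ego closes the gap.

    If the ego is not closing on the lead vehicle, there is no
    projected collision, so TTC is infinite.
    """
    closing_speed = s.ego_speed - s.lead_speed
    return s.gap / closing_speed if closing_speed > 0 else float("inf")


def generate_scenarios(n: int, rng: random.Random) -> list[Scenario]:
    """Stand-in for the paper's data generation pipeline: random highway
    car-following scenarios (parameter ranges are illustrative)."""
    return [
        Scenario(
            ego_speed=rng.uniform(20.0, 35.0),
            gap=rng.uniform(5.0, 80.0),
            lead_speed=rng.uniform(15.0, 30.0),
        )
        for _ in range(n)
    ]


def select_critical(scenarios: list[Scenario],
                    ttc_threshold: float = 3.0) -> list[Scenario]:
    """Keep only scenarios whose TTC falls below the threshold; in a
    closed-loop framework these would be fed back into RL training."""
    return [s for s in scenarios if time_to_collision(s) < ttc_threshold]


if __name__ == "__main__":
    rng = random.Random(0)
    pool = generate_scenarios(200, rng)
    critical = select_critical(pool)
    print(f"{len(critical)} of {len(pool)} scenarios are critical")
```

In the full closed-loop framework, the selected critical cases would be prioritized in the next training batch and the agent's resulting performance gaps would steer the next round of generation; this sketch only shows the filtering half of that loop.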
Related papers
- Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey [8.463282079069362]
Deep Reinforcement Learning (DRL) is an approach for training autonomous agents across various complex environments.
It remains susceptible to minor variations in conditions, raising concerns about its reliability in real-world applications.
One way to improve the robustness of DRL to unknown changes in conditions is through adversarial training.
arXiv Detail & Related papers (2024-03-01T10:16:46Z) - Adaptive Testing Environment Generation for Connected and Automated Vehicles with Dense Reinforcement Learning [7.6589102528398065]
We develop an adaptive testing environment that bolsters evaluation robustness by incorporating multiple surrogate models.
We propose the dense reinforcement learning method and devise a new adaptive policy with high sample efficiency.
arXiv Detail & Related papers (2024-02-29T15:42:33Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Continual Driving Policy Optimization with Closed-Loop Individualized Curricula [3.171483862183451]
We develop a continual driving policy optimization framework featuring Closed-Loop Individualized Curricula (CLIC).
CLIC frames AV evaluation as a collision prediction task, estimating the probability of AV failure in each candidate scenario at each iteration.
We show that CLIC surpasses other curriculum-based training strategies, showing substantial improvement in managing risky scenarios.
arXiv Detail & Related papers (2023-09-25T15:14:54Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Learning from Demonstrations of Critical Driving Behaviours Using Driver's Risk Field [4.272601420525791]
Imitation learning (IL) has been widely used in industry as the core of autonomous vehicle (AV) planning modules.
Previous IL works show sample inefficiency and poor generalisation in safety-critical scenarios, on which they are rarely tested.
We present an IL model using the spline coefficient parameterisation and offline expert queries to enhance safety and training efficiency.
arXiv Detail & Related papers (2022-10-04T17:07:35Z) - Curriculum Learning for Safe Mapless Navigation [71.55718344087657]
This work investigates the effects of Curriculum Learning (CL)-based approaches on the agent's performance.
In particular, we focus on the safety aspect of robotic mapless navigation, comparing CL-based approaches against a standard end-to-end (E2E) training strategy.
arXiv Detail & Related papers (2021-12-23T12:30:36Z) - Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z) - UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) can learn a policy that potentially even outperforms the behavior in the sub-optimal data set.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z) - Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.