Predictive Maneuver Planning with Deep Reinforcement Learning (PMP-DRL)
for comfortable and safe autonomous driving
- URL: http://arxiv.org/abs/2306.09055v1
- Date: Thu, 15 Jun 2023 11:27:30 GMT
- Title: Predictive Maneuver Planning with Deep Reinforcement Learning (PMP-DRL)
for comfortable and safe autonomous driving
- Authors: Jayabrata Chowdhury, Vishruth Veerendranath, Suresh Sundaram,
Narasimhan Sundararajan
- Abstract summary: This paper presents a Predictive Maneuver Planning with Deep Reinforcement Learning (PMP-DRL) model for maneuver planning.
By learning from its experience, a Reinforcement Learning (RL)-based driving agent can adapt to changing driving conditions.
The results clearly show that PMP-DRL can handle complex real-world scenarios and make safer and more comfortable maneuver decisions than rule-based and imitative models.
- Score: 7.3045725197814875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a Predictive Maneuver Planning with Deep Reinforcement
Learning (PMP-DRL) model for maneuver planning. Traditional rule-based maneuver
planning approaches often struggle to handle the variability of real-world
driving scenarios. By learning from its experience,
a Reinforcement Learning (RL)-based driving agent can adapt to changing driving
conditions and improve its performance over time. Our proposed approach
combines a predictive model and an RL agent to plan for comfortable and safe
maneuvers. The predictive model is trained using historical driving data to
predict the future positions of other surrounding vehicles. The surrounding
vehicles' past and predicted future positions are embedded in context-aware
grid maps. At the same time, the RL agent learns to make maneuvers based on
this spatio-temporal context information. Performance evaluation of PMP-DRL has
been carried out using simulated environments generated from publicly available
NGSIM US101 and I80 datasets. The training sequence shows continuous
improvement in driving performance, indicating that the proposed PMP-DRL can
learn the trade-off between safety and comfort. The decisions generated by a
recent imitation learning-based model are compared with the proposed PMP-DRL
for unseen scenarios. The results clearly show that PMP-DRL can handle complex
real-world scenarios and make safer and more comfortable maneuver decisions
than rule-based and imitative models.
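The context-aware grid maps described in the abstract can be sketched roughly as follows: each past or predicted time step of the surrounding vehicles is rasterized into an ego-centered occupancy grid, and the stack of grids forms the spatio-temporal context fed to the RL agent. The grid dimensions, cell size, and track format below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def build_context_grid(ego_xy, neighbor_tracks, grid_size=(32, 32), cell_m=2.0):
    """Embed past and predicted neighbor positions into a grid-map stack.

    ego_xy: (2,) position of the ego vehicle.
    neighbor_tracks: (T, N, 2) positions of N surrounding vehicles over
    T past + predicted time steps (illustrative format).
    Returns a (T, H, W) stack of ego-centered occupancy grids.
    """
    T, N, _ = neighbor_tracks.shape
    H, W = grid_size
    grids = np.zeros((T, H, W), dtype=np.float32)
    for t in range(T):
        # Positions relative to the ego vehicle, expressed in grid cells.
        rel = (neighbor_tracks[t] - ego_xy) / cell_m
        cols = np.round(rel[:, 0] + W / 2).astype(int)
        rows = np.round(rel[:, 1] + H / 2).astype(int)
        # Keep only vehicles that fall inside the grid.
        ok = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
        grids[t, rows[ok], cols[ok]] = 1.0  # mark occupied cells
    return grids  # spatio-temporal context for the RL agent
```

An agent would consume this (T, H, W) tensor as its observation; richer variants could encode velocity or vehicle class per cell instead of binary occupancy.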
Related papers
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z) - Planning with Adaptive World Models for Autonomous Driving [50.4439896514353]
Motion planners (MPs) are crucial for safe navigation in complex urban environments.
nuPlan, a recently released MP benchmark, augments real-world driving logs with closed-loop simulation logic.
We present AdaptiveDriver, a model-predictive control (MPC) based planner that unrolls different world models conditioned on BehaviorNet's predictions.
arXiv Detail & Related papers (2024-06-15T18:53:45Z) - HighwayLLM: Decision-Making and Navigation in Highway Driving with RL-Informed Language Model [5.4854443795779355]
This study presents a novel approach, HighwayLLM, which harnesses the reasoning capabilities of large language models (LLMs) to predict future waypoints for the ego vehicle's navigation.
Our approach also utilizes a pre-trained Reinforcement Learning (RL) model to serve as a high-level planner, making decisions on appropriate meta-level actions.
arXiv Detail & Related papers (2024-05-22T11:32:37Z) - Data-efficient Deep Reinforcement Learning for Vehicle Trajectory
Control [6.144517901919656]
Reinforcement learning (RL) promises to achieve control performance superior to classical approaches.
Standard RL approaches like soft actor-critic (SAC) require extensive amounts of training data to be collected.
We apply recently developed data-efficient deep RL methods to vehicle trajectory control.
arXiv Detail & Related papers (2023-11-30T09:38:59Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - Data-Efficient Task Generalization via Probabilistic Model-based Meta
Reinforcement Learning [58.575939354953526]
PACOH-RL is a novel model-based Meta-Reinforcement Learning (Meta-RL) algorithm designed to efficiently adapt control policies to changing dynamics.
Existing Meta-RL methods require abundant meta-learning data, limiting their applicability in settings such as robotics.
Our experiment results demonstrate that PACOH-RL outperforms model-based RL and model-based Meta-RL baselines in adapting to new dynamic conditions.
arXiv Detail & Related papers (2023-11-13T18:51:57Z) - Action and Trajectory Planning for Urban Autonomous Driving with
Hierarchical Reinforcement Learning [1.3397650653650457]
We propose an action and trajectory planner using a Hierarchical Reinforcement Learning (atHRL) method.
We empirically verify the efficacy of atHRL through extensive experiments in complex urban driving scenarios.
arXiv Detail & Related papers (2023-06-28T07:11:02Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA)
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
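The transformed-MDP idea summarized above can be sketched as a thin wrapper: the agent acts in a learned latent action space, and a decoder maps each latent action back to an action of the original MDP, so a predictive model can be trained on the transformed dynamics. The decoder and the linear environment below are illustrative stand-ins, not PMA's learned components.

```python
import numpy as np

class LatentActionMDP:
    """Wrap an environment so the agent acts in a latent action space."""

    def __init__(self, decode, step_fn, state):
        self.decode = decode    # latent action z -> original action a
        self.step_fn = step_fn  # original MDP transition function
        self.state = state

    def step(self, z):
        a = self.decode(z)                    # map back to the original MDP
        self.state = self.step_fn(self.state, a)
        return self.state

# Illustrative stand-ins: a linear "learned" decoder and linear dynamics.
decode = lambda z: 0.5 * z
step_fn = lambda s, a: 0.9 * s + a
env = LatentActionMDP(decode, step_fn, np.zeros(2))
```

A predictive model would then be fit on (state, z, next state) tuples collected from this wrapper, i.e. on the transformed MDP rather than the original one.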
arXiv Detail & Related papers (2023-02-08T07:37:51Z) - UMBRELLA: Uncertainty-Aware Model-Based Offline Reinforcement Learning
Leveraging Planning [1.1339580074756188]
Offline reinforcement learning (RL) provides a framework for learning decision-making from offline data.
Self-driving vehicles (SDVs) can learn a policy that potentially outperforms the behavior in the sub-optimal dataset.
This motivates the use of model-based offline RL approaches, which leverage planning.
arXiv Detail & Related papers (2021-11-22T10:37:52Z) - Improving the Exploration of Deep Reinforcement Learning in Continuous
Domains using Planning for Policy Search [6.088695984060244]
We propose to integrate a kinodynamic planner into the exploration strategy and to learn a control policy offline from the generated environment interactions; the planner-driven exploration produces training data that helps PPS discover better policies.
We compare PPS with state-of-the-art D-RL methods in typical RL settings, including underactuated systems.
arXiv Detail & Related papers (2020-10-24T20:19:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.