Automated Lane Change Strategy using Proximal Policy Optimization-based
Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2002.02667v2
- Date: Wed, 20 May 2020 23:22:20 GMT
- Title: Automated Lane Change Strategy using Proximal Policy Optimization-based
Deep Reinforcement Learning
- Authors: Fei Ye, Xuxin Cheng, Pin Wang, Ching-Yao Chan, Jiucai Zhang
- Abstract summary: Lane-change maneuvers are commonly executed by drivers to follow a certain routing plan, overtake a slower vehicle, adapt to a merging lane ahead, etc.
In this study, we propose an automated lane change strategy using proximal policy optimization-based deep reinforcement learning.
The trained agent is able to learn a smooth, safe, and efficient driving policy to make lane-change decisions.
- Score: 10.909595997847443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane-change maneuvers are commonly executed by drivers to follow a certain
routing plan, overtake a slower vehicle, adapt to a merging lane ahead, etc.
However, improper lane change behaviors can be a major cause of traffic flow
disruptions and even crashes. While many rule-based methods have been proposed
to solve lane change problems for autonomous driving, they tend to exhibit
limited performance due to the uncertainty and complexity of the driving
environment. Machine learning-based methods offer an alternative approach, as
deep reinforcement learning (DRL) has shown promising success in many
application domains, including robotic manipulation, navigation, and playing
video games. However, applying DRL to autonomous driving still faces many
practical challenges in terms of slow learning rates, sample inefficiency, and
safety concerns. In this study, we propose an automated lane change strategy
using proximal policy optimization-based deep reinforcement learning, which
shows great advantages in learning efficiency while still maintaining stable
performance. The trained agent is able to learn a smooth, safe, and efficient
driving policy to make lane-change decisions (i.e., when and how) in
challenging situations such as dense traffic. The effectiveness of the
proposed policy is validated using task success rate and collision rate as
metrics. The simulation results demonstrate that lane-change maneuvers can be
learned and executed in a safe, smooth, and efficient manner.
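
As a rough illustration of the approach described in the abstract, the sketch below trains a discrete lane-change decision policy (keep lane, change left, change right) with PPO and then estimates the paper's evaluation metrics, task success rate and collision rate. The `LaneChangeEnv` environment, its observation layout, reward shaping, and all hyperparameters are illustrative assumptions standing in for the paper's actual simulator; only the Gymnasium and Stable-Baselines3 APIs are used as documented.

```python
# Hedged sketch of a PPO-based lane-change decision agent.
# LaneChangeEnv is a hypothetical, highly simplified stand-in for the
# paper's simulation environment.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class LaneChangeEnv(gym.Env):
    """Toy highway environment: the agent decides when and how to change lanes."""

    def __init__(self, n_lanes: int = 3, max_steps: int = 200):
        super().__init__()
        self.n_lanes = n_lanes
        self.max_steps = max_steps
        # Observation (illustrative layout): normalized ego speed and lane index,
        # plus gap / relative-speed features for each lane.
        self.observation_space = spaces.Box(
            low=-1.0, high=1.0, shape=(2 + 4 * n_lanes,), dtype=np.float32
        )
        # Discrete decisions: 0 = keep lane, 1 = change left, 2 = change right.
        self.action_space = spaces.Discrete(3)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.state = self.np_random.uniform(
            -1.0, 1.0, self.observation_space.shape
        ).astype(np.float32)
        return self.state, {}

    def step(self, action):
        self.t += 1
        # Placeholder dynamics: a real implementation would advance the traffic
        # simulation here and check geometrically for collisions.
        noise = self.np_random.normal(0.0, 0.05, self.state.shape)
        self.state = np.clip(self.state + noise, -1.0, 1.0).astype(np.float32)
        collided = bool(self.np_random.random() < 0.001)
        # Reward trades off efficiency (progress), smoothness (penalize
        # unnecessary lane changes), and safety (large collision penalty).
        reward = 0.1 - 0.05 * float(action != 0) - 10.0 * float(collided)
        terminated = collided
        truncated = self.t >= self.max_steps
        return self.state, reward, terminated, truncated, {}


if __name__ == "__main__":
    env = LaneChangeEnv()
    # PPO with a small MLP policy; hyperparameters are illustrative only.
    model = PPO("MlpPolicy", env, n_steps=256, batch_size=64, gamma=0.99, verbose=1)
    model.learn(total_timesteps=50_000)

    # Evaluate with the paper's metrics: task success rate and collision rate.
    episodes, collisions = 20, 0
    for _ in range(episodes):
        obs, _ = env.reset()
        terminated = truncated = False
        while not (terminated or truncated):
            action, _ = model.predict(obs, deterministic=True)
            obs, _, terminated, truncated, _ = env.step(action)
        collisions += int(terminated)
    print(f"success rate:   {(episodes - collisions) / episodes:.2f}")
    print(f"collision rate: {collisions / episodes:.2f}")
```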
Related papers
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - DRNet: A Decision-Making Method for Autonomous Lane Changingwith Deep
Reinforcement Learning [7.2282857478457805]
"DRNet" is a novel DRL-based framework that enables a DRL agent to learn to drive by executing reasonable lane changing on simulated highways.
Our DRL agent has the ability to learn the desired task without causing collisions and outperforms DDQN and other baseline models.
arXiv Detail & Related papers (2023-11-02T21:17:52Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - Comprehensive Training and Evaluation on Deep Reinforcement Learning for
Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z) - Efficient Reinforcement Learning for Autonomous Driving with
Parameterized Skills and Priors [16.87227671645374]
ASAP-RL is an efficient reinforcement learning algorithm for autonomous driving.
A skill parameter inverse recovery method is proposed to convert expert demonstrations from control space to skill space.
We validate our proposed method on interactive dense-traffic driving tasks given simple and sparse rewards.
arXiv Detail & Related papers (2023-05-08T01:39:35Z) - Imitation Is Not Enough: Robustifying Imitation with Reinforcement
Learning for Challenging Driving Scenarios [147.16925581385576]
We show how imitation learning combined with reinforcement learning can substantially improve the safety and reliability of driving policies.
We train a policy on over 100k miles of urban driving data, and measure its effectiveness in test scenarios grouped by different levels of collision likelihood.
arXiv Detail & Related papers (2022-12-21T23:59:33Z) - Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z) - Quick Learner Automated Vehicle Adapting its Roadmanship to Varying
Traffic Cultures with Meta Reinforcement Learning [15.570621284198017]
We develop Meta Reinforcement Learning (MRL) driving policies to showcase their quick learning capability.
Two types of distribution variation in environments were designed and simulated to validate the fast adaptation capability of resulting MRL driving policies.
arXiv Detail & Related papers (2021-04-18T15:04:37Z) - Decision-making for Autonomous Vehicles on Highway: Deep Reinforcement
Learning with Continuous Action Horizon [14.059728921828938]
This paper utilizes the deep reinforcement learning (DRL) method to address the continuous-horizon decision-making problem on the highway.
The running objective of the ego automated vehicle is to execute an efficient and smooth policy without collision.
The PPO-DRL-based decision-making strategy is estimated from multiple perspectives, including the optimality, learning efficiency, and adaptability.
arXiv Detail & Related papers (2020-08-26T22:49:27Z)