Safe, Efficient, Comfort, and Energy-saving Automated Driving through
Roundabout Based on Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2306.11465v1
- Date: Tue, 20 Jun 2023 11:39:55 GMT
- Title: Safe, Efficient, Comfort, and Energy-saving Automated Driving through
Roundabout Based on Deep Reinforcement Learning
- Authors: Henan Yuan, Penghui Li, Bart van Arem, Liujiang Kang, and Yongqi Dong
- Abstract summary: Traffic scenarios in roundabouts pose substantial complexity for automated driving.
This study explores, employs, and implements various DRL algorithms to instruct automated vehicles' driving through roundabouts.
All three tested DRL algorithms succeed in enabling automated vehicles to drive through the roundabout.
- Score: 3.4602940992970903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic scenarios in roundabouts pose substantial complexity for automated
driving. Manually mapping all possible scenarios into a state space is
labor-intensive and challenging. Deep reinforcement learning (DRL), with its
ability to learn from interaction with the environment, emerges as a promising
solution for training such automated driving models. This study explores,
employs, and implements various DRL algorithms, namely Deep Deterministic
Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Trust Region
Policy Optimization (TRPO) to instruct automated vehicles' driving through
roundabouts. The driving state space, action space, and reward function are
designed. The reward function considers safety, efficiency, comfort, and energy
consumption to align with real-world requirements. All three tested DRL
algorithms succeed in enabling automated vehicles to drive through the
roundabout. To holistically evaluate the performance of these algorithms, this
study establishes an evaluation methodology considering multiple indicators
such as safety, efficiency, and comfort level. A method employing the Analytic
Hierarchy Process is also developed to weigh these evaluation indicators.
Experimental results on various testing scenarios reveal that the TRPO
algorithm outperforms DDPG and PPO in terms of safety and efficiency, and PPO
performs best in terms of comfort level. Lastly, to verify the model's
adaptability and robustness in other driving scenarios, this study also
deploys the model trained by TRPO to a range of different testing scenarios,
e.g., highway driving and merging. Experimental results demonstrate that the
TRPO model trained on only roundabout driving scenarios exhibits a certain
degree of proficiency in highway driving and merging scenarios. This study
provides a foundation for the application of automated driving with DRL in real
traffic environments.
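The weighting scheme described above can be sketched in code. This is an illustrative sketch only, not the paper's implementation: the pairwise-comparison judgments and the per-step indicator terms are hypothetical placeholders, and the Analytic Hierarchy Process weights are recovered as the principal eigenvector of the comparison matrix via simple power iteration.

```python
# Sketch (assumed, not from the paper): AHP-derived weights for the four
# reward indicators -- safety, efficiency, comfort, energy saving -- combined
# into a scalar reward. Pure Python, no external dependencies.

def ahp_weights(pairwise, iters=200):
    """Approximate the principal eigenvector of a positive pairwise
    comparison matrix by power iteration; normalize so weights sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical expert judgments (Saaty's 1-9 scale), ordering the
# indicators safety > efficiency > comfort > energy saving.
PAIRWISE = [
    [1.0,   3.0,   5.0,   7.0],   # safety
    [1/3.0, 1.0,   3.0,   5.0],   # efficiency
    [1/5.0, 1/3.0, 1.0,   3.0],   # comfort
    [1/7.0, 1/5.0, 1/3.0, 1.0],   # energy saving
]
W = ahp_weights(PAIRWISE)

def reward(safety, efficiency, comfort, energy_saving):
    """Weighted sum of per-step indicator terms, each assumed
    normalized to [0, 1] before being combined."""
    terms = [safety, efficiency, comfort, energy_saving]
    return sum(w * t for w, t in zip(W, terms))
```

The same weight vector can serve double duty: as coefficients of the training reward and as the indicator weights in the post-hoc evaluation methodology the abstract describes.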
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z) - DRNet: A Decision-Making Method for Autonomous Lane Changing with Deep
Reinforcement Learning [7.2282857478457805]
"DRNet" is a novel DRL-based framework that enables a DRL agent to learn to drive by executing reasonable lane changing on simulated highways.
Our DRL agent has the ability to learn the desired task without causing collisions and outperforms DDQN and other baseline models.
arXiv Detail & Related papers (2023-11-02T21:17:52Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Action and Trajectory Planning for Urban Autonomous Driving with
Hierarchical Reinforcement Learning [1.3397650653650457]
We propose an action and trajectory planner using the Hierarchical Reinforcement Learning (atHRL) method.
We empirically verify the efficacy of atHRL through extensive experiments in complex urban driving scenarios.
arXiv Detail & Related papers (2023-06-28T07:11:02Z) - Comprehensive Training and Evaluation on Deep Reinforcement Learning for
Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z) - Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z) - Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement
Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns a ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm with baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
arXiv Detail & Related papers (2021-03-08T05:34:05Z) - Decision-making for Autonomous Vehicles on Highway: Deep Reinforcement
Learning with Continuous Action Horizon [14.059728921828938]
This paper utilizes the deep reinforcement learning (DRL) method to address the continuous-horizon decision-making problem on the highway.
The running objective of the ego automated vehicle is to execute an efficient and smooth policy without collision.
The PPO-DRL-based decision-making strategy is evaluated from multiple perspectives, including optimality, learning efficiency, and adaptability.
arXiv Detail & Related papers (2020-08-26T22:49:27Z) - Decision-making Strategy on Highway for Autonomous Vehicles using Deep
Reinforcement Learning [6.298084785377199]
A deep reinforcement learning (DRL)-enabled decision-making policy is constructed for autonomous vehicles to address the overtaking behaviors on the highway.
A hierarchical control framework is presented to control these vehicles, in which the upper level manages the driving decisions.
The DDQN-based overtaking policy could accomplish highway driving tasks efficiently and safely.
arXiv Detail & Related papers (2020-07-16T23:41:48Z) - Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate entry into busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.