Comprehensive Training and Evaluation on Deep Reinforcement Learning for
Automated Driving in Various Simulated Driving Maneuvers
- URL: http://arxiv.org/abs/2306.11466v2
- Date: Fri, 18 Aug 2023 05:58:31 GMT
- Title: Comprehensive Training and Evaluation on Deep Reinforcement Learning for
Automated Driving in Various Simulated Driving Maneuvers
- Authors: Yongqi Dong, Tobias Datema, Vincent Wassenaar, Joris van de Weg, Cahit
Tolga Kopar, and Harim Suleman
- Abstract summary: This study implements, evaluates, and compares two DRL algorithms, Deep Q-Networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
- Score: 0.4241054493737716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing and testing automated driving models in the real world can be
challenging and even dangerous, and simulation can help, especially for
challenging maneuvers. Deep reinforcement learning (DRL) has the potential
to tackle complex decision-making and control tasks by learning through
interaction with the environment, making it well suited to automated
driving, yet it has not been explored in detail for this purpose. This study
carried out a comprehensive investigation by implementing, evaluating, and
comparing two DRL algorithms, Deep Q-Networks (DQN) and Trust Region Policy
Optimization (TRPO), for training automated driving on the highway-env
simulation platform.
Effective, customized reward functions were developed, and the implemented
algorithms were evaluated in terms of on-lane accuracy (how well the car stays
within its lane), efficiency (how fast the car drives), safety (how likely the
car is to crash into obstacles), and comfort (how much the car jerks, e.g., by
suddenly accelerating or braking). Results show that TRPO-based models with
modified reward functions delivered the best performance in most cases.
Furthermore, to train a unified driving model that can handle a variety of
driving maneuvers rather than only specific ones, this study extended
highway-env with an additional customized training environment, namely
ComplexRoads, which integrates various driving maneuvers and multiple road
scenarios. Models trained on the designed ComplexRoads environment adapt well
to other driving maneuvers with promising overall performance. Lastly, several
functionalities were added to highway-env to implement this work. The code is
openly available on GitHub at https://github.com/alaineman/drlcarsim-paper.
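The abstract's four evaluation dimensions suggest a weighted composite reward. The sketch below is purely illustrative and is not the paper's actual reward function: the helper name `driving_reward`, the weights, and the normalization constants are all hypothetical placeholders chosen to show the idea of combining on-lane accuracy, efficiency, safety, and comfort into a single scalar signal.

```python
# Hypothetical sketch of a composite driving reward in the spirit of the
# four metrics named in the abstract. All weights, limits, and signal names
# are illustrative assumptions, not the authors' implementation.

def driving_reward(lane_offset, speed, crashed, jerk,
                   max_offset=2.0, speed_limit=30.0, max_jerk=5.0,
                   w_lane=0.3, w_speed=0.3, w_safety=0.3, w_comfort=0.1):
    """Weighted sum of on-lane accuracy, efficiency, safety, and comfort,
    with each term normalized to [0, 1]."""
    if crashed:
        # Safety: a collision dominates everything else.
        return -1.0
    on_lane = max(0.0, 1.0 - abs(lane_offset) / max_offset)  # on-lane accuracy
    efficiency = min(speed, speed_limit) / speed_limit       # driving speed
    comfort = max(0.0, 1.0 - abs(jerk) / max_jerk)           # penalize jerks
    safety = 1.0                                             # no crash this step
    return (w_lane * on_lane + w_speed * efficiency
            + w_safety * safety + w_comfort * comfort)
```

An ideal step (centered in lane, at the speed limit, no jerk, no crash) scores near the maximum of 1.0, while any crash immediately returns -1.0; in practice such a shaped reward would be computed per simulation step from the environment's vehicle state.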
Related papers
- Self-Driving Car Racing: Application of Deep Reinforcement Learning [0.0]
The project aims to develop an AI agent that efficiently drives a simulated car in the OpenAI Gymnasium CarRacing environment.
We investigate various RL algorithms, including Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and novel adaptations that incorporate transfer learning and recurrent neural networks (RNNs) for enhanced performance.
arXiv Detail & Related papers (2024-10-30T07:32:25Z) - DRNet: A Decision-Making Method for Autonomous Lane Changing with Deep
Reinforcement Learning [7.2282857478457805]
"DRNet" is a novel DRL-based framework that enables a DRL agent to learn to drive by executing reasonable lane changing on simulated highways.
Our DRL agent has the ability to learn the desired task without causing collisions and outperforms DDQN and other baseline models.
arXiv Detail & Related papers (2023-11-02T21:17:52Z) - Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous
Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z) - Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - Safe, Efficient, Comfort, and Energy-saving Automated Driving through
Roundabout Based on Deep Reinforcement Learning [3.4602940992970903]
Traffic scenarios in roundabouts pose substantial complexity for automated driving.
This study explores, employs, and implements various DRL algorithms to drive automated vehicles through roundabouts.
All three tested DRL algorithms succeed in enabling automated vehicles to drive through the roundabout.
arXiv Detail & Related papers (2023-06-20T11:39:55Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement
Learning [13.699336307578488]
Deep imitative reinforcement learning approach (DIRL) achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z) - Investigating Value of Curriculum Reinforcement Learning in Autonomous
Driving Under Diverse Road and Weather Conditions [0.0]
This paper focuses on investigating the value of curriculum reinforcement learning in autonomous driving applications.
We setup several different driving scenarios in a realistic driving simulator, with varying road complexity and weather conditions.
Results show that curriculum RL can yield significant gains in complex driving tasks, both in terms of driving performance and sample complexity.
arXiv Detail & Related papers (2021-03-14T12:05:05Z) - Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate the entering in busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.