Comparative Study of Q-Learning and NeuroEvolution of Augmenting
Topologies for Self Driving Agents
- URL: http://arxiv.org/abs/2209.09007v1
- Date: Mon, 19 Sep 2022 13:34:18 GMT
- Title: Comparative Study of Q-Learning and NeuroEvolution of Augmenting
Topologies for Self Driving Agents
- Authors: Arhum Ishtiaq, Maheen Anees, Sara Mahmood, Neha Jafry
- Abstract summary: It is expected that autonomous driving can reduce the number of driving accidents around the world.
We will focus on reinforcement learning algorithms and NeuroEvolution of Augmenting Topologies (NEAT), a combination of evolutionary algorithms and artificial neural networks, to train a model agent to learn how to drive on a given path.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving vehicles have been of keen interest ever since automation
of various tasks started. Humans are prone to exhaustion and have a slow
response time on the road, and on top of that driving is already quite a
dangerous task with around 1.35 million road traffic incident deaths each year.
It is expected that autonomous driving can reduce the number of driving
accidents around the world which is why this problem has been of keen interest
for researchers. Currently, self-driving vehicles use different algorithms for
various sub-problems in making the vehicle autonomous. We will focus on
reinforcement learning algorithms, more specifically Q-learning, and
NeuroEvolution of Augmenting Topologies (NEAT), a combination of evolutionary
algorithms and artificial neural networks, to train a model agent to learn how
to drive on a given path. This paper will focus on drawing a comparison between
the two aforementioned algorithms.
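The Q-learning half of this comparison rests on the tabular update rule Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)). As a rough sketch only (the track, states, actions, and reward scheme below are invented for illustration and are not taken from the paper), an agent learning to advance along a one-dimensional path might look like:

```python
import random

random.seed(0)  # reproducibility

# Toy 1-D "track" of 6 cells; reaching the last cell ends the episode.
N_STATES = 6
ACTIONS = [0, 1]          # 0 = stay, 1 = move forward
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]

def step(state, action):
    """Deterministic environment: reward 1 only on reaching the final cell."""
    nxt = min(state + action, N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

# Greedy policy after training: the agent should always move forward.
policy = [max(ACTIONS, key=lambda act: q[st][act]) for st in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1, 1]
```

NEAT, by contrast, evolves both the weights and the topology of a neural network with an evolutionary algorithm rather than applying a per-step value update; that difference in how experience is used is what the paper's comparison turns on.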
Related papers
- Autonomous Algorithm for Training Autonomous Vehicles with Minimal Human Intervention [18.95571506577409]
We introduce a novel algorithm to train an autonomous vehicle with minimal human intervention.
Our algorithm takes into account the learning progress of the autonomous vehicle to determine when to abort episodes.
We also take advantage of rule-based autonomous driving algorithms to safely reset an autonomous vehicle to an initial state.
arXiv Detail & Related papers (2024-05-22T05:04:44Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Comprehensive Training and Evaluation on Deep Reinforcement Learning for
Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms, Deep Q-Networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Parallelized and Randomized Adversarial Imitation Learning for
Safety-Critical Self-Driving Vehicles [11.463476667274051]
It is essential to consider reliable ADAS function coordination to control the driving system safely.
This paper proposes a randomized adversarial imitation learning (RAIL) algorithm.
The proposed method is able to train the decision maker that deals with the LIDAR data and controls the autonomous driving in multi-lane complex highway environments.
arXiv Detail & Related papers (2021-12-26T23:42:49Z) - Model-based Decision Making with Imagination for Autonomous Parking [50.41076449007115]
The proposed algorithm consists of three parts: an imaginative model for anticipating results before parking, an improved rapid-exploring random tree (RRT) and a path smoothing module.
Our algorithm is based on a real kinematic vehicle model, which makes it more suitable for application on real autonomous cars.
In order to evaluate the algorithm's effectiveness, we have compared our algorithm with traditional RRT, within three different parking scenarios.
arXiv Detail & Related papers (2021-08-25T18:24:34Z) - Improving Robustness of Learning-based Autonomous Steering Using
Adversarial Images [58.287120077778205]
We introduce a framework for analyzing robustness of the learning algorithm w.r.t varying quality in the image input for autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z) - Learning Accurate and Human-Like Driving using Semantic Maps and
Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z) - Deep Reinforcement Learning for Human-Like Driving Policies in Collision
Avoidance Tasks of Self-Driving Cars [1.160208922584163]
We introduce a model-free, deep reinforcement learning approach to generate automated human-like driving policies.
We study a static obstacle avoidance task on a two-lane highway road in simulation.
We demonstrate that our approach leads to human-like driving policies.
arXiv Detail & Related papers (2020-06-07T18:20:33Z) - AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.