Autonomous Algorithm for Training Autonomous Vehicles with Minimal Human Intervention
- URL: http://arxiv.org/abs/2405.13345v1
- Date: Wed, 22 May 2024 05:04:44 GMT
- Title: Autonomous Algorithm for Training Autonomous Vehicles with Minimal Human Intervention
- Authors: Sang-Hyun Lee, Daehyeok Kwon, Seung-Woo Seo
- Abstract summary: We introduce a novel algorithm to train an autonomous vehicle with minimal human intervention.
Our algorithm takes into account the learning progress of the autonomous vehicle to determine when to abort episodes.
We also take advantage of rule-based autonomous driving algorithms to safely reset an autonomous vehicle to an initial state.
- Score: 18.95571506577409
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reinforcement learning (RL) provides a compelling framework for enabling autonomous vehicles to continue to learn and improve diverse driving behaviors on their own. However, training real-world autonomous vehicles with current RL algorithms presents several challenges. One critical challenge, often overlooked in these algorithms, is the need to reset a driving environment between every episode. While resetting an environment after each episode is trivial in simulated settings, it demands significant human intervention in the real world. In this paper, we introduce a novel autonomous algorithm that allows off-the-shelf RL algorithms to train an autonomous vehicle with minimal human intervention. Our algorithm takes into account the learning progress of the autonomous vehicle to determine when to abort episodes before it enters unsafe states and where to reset it for subsequent episodes in order to gather informative transitions. The learning progress is estimated based on the novelty of both current and future states. We also take advantage of rule-based autonomous driving algorithms to safely reset an autonomous vehicle to an initial state. We evaluate our algorithm against baselines on diverse urban driving tasks. The experimental results show that our algorithm is task-agnostic and achieves better driving performance with fewer manual resets than baselines.
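As a rough illustration of the abort-and-reset idea, the sketch below approximates the novelty signal with a k-nearest-neighbor distance over previously visited states and aborts an episode when the anticipated next state looks too unfamiliar. The environment interface, the `predict_next` model, and the threshold are all assumptions for illustration; this is not the authors' implementation.

```python
# Hypothetical sketch of novelty-gated episode aborts, not the paper's code.
# Novelty is approximated with a k-nearest-neighbor distance over visited
# states; the paper estimates it from both current and future states.
import numpy as np

class NoveltyMonitor:
    """Tracks visited states and scores how unfamiliar a state looks."""
    def __init__(self, k=10):
        self.visited = []
        self.k = k

    def record(self, state):
        self.visited.append(np.asarray(state, dtype=float))

    def novelty(self, state):
        if len(self.visited) < self.k:
            return 1.0  # everything is novel early in training
        dists = np.linalg.norm(np.stack(self.visited) - state, axis=1)
        return float(np.sort(dists)[: self.k].mean())

def run_episode(env, policy, predict_next, monitor, abort_threshold=2.0):
    # `env` is assumed to expose a rule-based safe reset, as in the paper,
    # plus a Gym-style step(); both are hypothetical interfaces here.
    state = env.rule_based_reset()
    done = False
    while not done:
        action = policy(state)
        # Abort before entering unsafe, unfamiliar territory: if the
        # predicted next state is too novel, end the episode early.
        if monitor.novelty(predict_next(state, action)) > abort_threshold:
            break
        state, reward, done, info = env.step(action)
        monitor.record(state)
```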
Related papers
- Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models [60.87795376541144]
A world model is a neural network capable of predicting an agent's next state given past states and actions.
During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations.
We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing.
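A world model in this sense can be sketched in a few lines: the PyTorch module below, with assumed dimensions and architecture, simply maps a short history of states and actions to a predicted next state. It is illustrative only, not the paper's model.

```python
# Minimal world-model sketch: predict the next state from past states and
# actions. Dimensions and architecture are assumptions, not the paper's.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    def __init__(self, state_dim=16, action_dim=2, history=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * (state_dim + action_dim), hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),  # predicted next state
        )

    def forward(self, past_states, past_actions):
        # past_states: (batch, history, state_dim)
        # past_actions: (batch, history, action_dim)
        x = torch.cat([past_states, past_actions], dim=-1).flatten(1)
        return self.net(x)

model = WorldModel()
pred = model(torch.randn(8, 4, 16), torch.randn(8, 4, 2))  # -> (8, 16)
```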
arXiv Detail & Related papers (2024-09-25T06:48:25Z)
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
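The core of analytic policy gradients is that, with differentiable dynamics, the return can be backpropagated through the simulator into the policy. A toy sketch with made-up one-dimensional dynamics, not the paper's simulator:

```python
# Sketch of analytic policy gradients (APG): gradients flow through the
# differentiable dynamics into the policy, instead of being estimated
# from sampled rewards. Toy 1-D system; everything here is illustrative.
import torch

policy = torch.nn.Linear(1, 1)          # trivial state -> action policy
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def step(state, action):
    return state + 0.1 * action         # differentiable toy dynamics

for _ in range(200):
    state = torch.tensor([[1.0]])
    loss = 0.0
    for _ in range(20):                 # unroll the simulator
        action = policy(state)
        state = step(state, action)
        loss = loss + state.pow(2).mean()  # drive the state toward 0
    opt.zero_grad()
    loss.backward()                     # backprop through the dynamics
    opt.step()
```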
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms: Deep Q-Networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
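For reference, a single DQN update of the kind compared in this study looks as follows; network sizes, hyperparameters, and the fake minibatch are stand-ins, not the paper's setup.

```python
# One generic DQN update step (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
target_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# A fake minibatch standing in for replay-buffer samples.
s = torch.randn(32, 8); a = torch.randint(0, 4, (32,))
r = torch.randn(32);    s2 = torch.randn(32, 8); done = torch.zeros(32)

with torch.no_grad():
    # Bellman target: r + gamma * max_a' Q_target(s', a') for non-terminal s'.
    target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q, target)
opt.zero_grad(); loss.backward(); opt.step()
```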
arXiv Detail & Related papers (2023-06-20T11:41:01Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and approach the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Comparative Study of Q-Learning and NeuroEvolution of Augmenting Topologies for Self Driving Agents [0.0]
It is expected that autonomous driving can reduce the number of driving accidents around the world.
We focus on reinforcement learning algorithms and NeuroEvolution of Augmenting Topologies (NEAT), a combination of evolutionary algorithms and artificial neural networks, to train an agent to learn how to drive on a given path.
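The Q-learning half of this comparison reduces to the classic tabular update; a textbook sketch with assumed state and action counts, not the paper's agent:

```python
# Tabular Q-learning with epsilon-greedy exploration (generic sketch).
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def choose(state):
    # Explore with probability eps, otherwise exploit the current estimate.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q[state].argmax())

def update(s, a, r, s2):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
```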
arXiv Detail & Related papers (2022-09-19T13:34:18Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Model-based Decision Making with Imagination for Autonomous Parking [50.41076449007115]
The proposed algorithm consists of three parts: an imaginative model for anticipating results before parking, an improved rapidly-exploring random tree (RRT), and a path smoothing module.
Our algorithm is based on a real kinematic vehicle model, which makes it more suitable for application on real autonomous cars.
To evaluate the algorithm's effectiveness, we compare it with traditional RRT in three different parking scenarios.
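For context, the grow step of a basic RRT, which the parking algorithm improves on, can be sketched as below; this is a generic 2-D version with assumed bounds, not the paper's kinematic variant.

```python
# Core grow step of a plain RRT (generic 2-D sketch; collision checking
# and the paper's kinematic constraints are omitted).
import numpy as np

rng = np.random.default_rng(1)
nodes = [np.array([0.0, 0.0])]        # tree rooted at the start pose
parents = {0: None}                   # backtrack through this for the path
step_size = 0.5

def extend(goal, iters=500):
    for _ in range(iters):
        sample = rng.uniform(-10, 10, size=2)   # random free-space sample
        nearest = min(range(len(nodes)),
                      key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step_size * direction / (np.linalg.norm(direction) + 1e-9)
        nodes.append(new)
        parents[len(nodes) - 1] = nearest
        if np.linalg.norm(new - goal) < step_size:  # reached the goal region
            return len(nodes) - 1
    return None

goal_idx = extend(np.array([8.0, 8.0]))
```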
arXiv Detail & Related papers (2021-08-25T18:24:34Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
A deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Reinforcement Learning Based Safe Decision Making for Highway Autonomous Driving [1.995792341399967]
We develop a safe decision-making method for self-driving cars in a multi-lane, single-agent setting.
The proposed approach utilizes deep reinforcement learning to achieve a high-level policy for safe tactical decision-making.
arXiv Detail & Related papers (2021-05-13T19:17:30Z)
- Efficient Sampling-Based Maximum Entropy Inverse Reinforcement Learning with Application to Autonomous Driving [35.44498286245894]
We present an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm in this paper.
We evaluate the proposed algorithm on real driving data, including both non-interactive and interactive scenarios.
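At its core, sampling-based maximum-entropy IRL ascends the gradient E_demo[features] - E_model[features], with the model expectation approximated by importance-weighted samples. A sketch with a linear reward and fabricated features, not the paper's algorithm:

```python
# Sampling-based MaxEnt IRL sketch: self-normalized importance weights
# approximate the model's feature expectation. Data here is fabricated.
import numpy as np

rng = np.random.default_rng(2)
theta = np.zeros(4)                       # linear reward weights

demo_feats = rng.normal(size=(20, 4))     # stand-in demonstration features
sample_feats = rng.normal(size=(100, 4))  # stand-in sampled-trajectory features

for _ in range(100):
    # Weights proportional to exp(reward) under the current theta.
    w = np.exp(sample_feats @ theta)
    w /= w.sum()
    grad = demo_feats.mean(axis=0) - w @ sample_feats
    theta += 0.05 * grad                  # gradient ascent on log-likelihood
```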
arXiv Detail & Related papers (2020-06-22T01:41:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.