Formula RL: Deep Reinforcement Learning for Autonomous Racing using
Telemetry Data
- URL: http://arxiv.org/abs/2104.11106v1
- Date: Thu, 22 Apr 2021 14:40:12 GMT
- Title: Formula RL: Deep Reinforcement Learning for Autonomous Racing using
Telemetry Data
- Authors: Adrian Remonda, Sarah Krebs, Eduardo Veas, Granit Luzhnica, Roman Kern
- Abstract summary: We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space.
We put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments.
Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
- Score: 4.042350304426975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the use of reinforcement learning (RL) models for
autonomous racing. In contrast to passenger cars, where safety is the top
priority, a racing car aims to minimize the lap-time. We frame the problem as a
reinforcement learning task with a multidimensional input consisting of the
vehicle telemetry, and a continuous action space. To find out which RL methods
better solve the problem and whether the obtained models generalize to driving
on unknown tracks, we put 10 variants of deep deterministic policy gradient
(DDPG) to race in two experiments: i) studying how RL methods learn to drive a
racing car and ii) studying how the learning scenario influences the capability
of the models to generalize. Our studies show that models trained with RL are
not only able to drive faster than the baseline open source handcrafted bots
but also generalize to unknown tracks.
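For illustration only, the framing described above (a multidimensional telemetry observation, a continuous action space, and DDPG-style actor-critic learning) could be sketched as follows in PyTorch. The observation and action dimensions, network sizes, and the omission of the replay buffer, target-network updates, and exploration noise are assumptions made for brevity, not the authors' exact configuration.

# Illustrative sketch only: a DDPG-style actor-critic over a telemetry
# observation with continuous (steering, throttle, brake) controls.
import torch
import torch.nn as nn

OBS_DIM = 29  # assumed telemetry size (speed, RPM, track sensors, ...)
ACT_DIM = 3   # assumed controls: steering [-1, 1], throttle, brake

class Actor(nn.Module):
    # Deterministic policy mu(s): telemetry -> continuous controls.
    def __init__(self, obs_dim=OBS_DIM, act_dim=ACT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim), nn.Tanh(),  # squash to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    # Action-value Q(s, a) used for the DDPG policy-gradient update.
    def __init__(self, obs_dim=OBS_DIM, act_dim=ACT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def ddpg_update(actor, critic, actor_tgt, critic_tgt, batch,
                actor_opt, critic_opt, gamma=0.99):
    # One update on a sampled batch; rew and done are expected with shape
    # (batch, 1). Replay buffer and soft target updates are omitted.
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        target_q = rew + gamma * (1.0 - done) * critic_tgt(next_obs, actor_tgt(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(obs, actor(obs)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

In a full DDPG setup, the target networks are soft-updated after each step and exploration noise (Gaussian or Ornstein-Uhlenbeck) is added to the actor's output while collecting experience.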
Related papers
- CIMRL: Combining IMitation and Reinforcement Learning for Safe Autonomous Driving [45.05135725542318]
CIMRL (Combining IMitation and Reinforcement Learning) enables training driving policies in simulation by leveraging imitative motion priors and safety constraints.
By combining RL and imitation, we demonstrate that our method achieves state-of-the-art results in closed-loop simulation and real-world driving benchmarks.
arXiv Detail & Related papers (2024-06-13T07:31:29Z)
- Demystifying the Physics of Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making [6.243971093896272]
We use a continuous proximal policy optimization-based DRL algorithm as the baseline model and add a multi-head attention framework in an open-source AV simulation environment.
We show that the weights in the first head encode the positions of the neighboring vehicles, while the second head focuses exclusively on the leader vehicle (a brief attention sketch appears after this list).
arXiv Detail & Related papers (2024-03-18T02:59:13Z)
- Racing Towards Reinforcement Learning based control of an Autonomous Formula SAE Car [1.0124625066746598]
This paper presents the initial investigation into utilising Deep Reinforcement Learning (RL) for end-to-end control of an autonomous FS race car.
We train two state-of-the-art RL algorithms in simulation on tracks analogous to the full-scale design, using a Turtlebot2 platform as the physical testbed.
The results demonstrate that our approach can successfully learn to race in simulation and then transfer to a real-world racetrack on the physical platform.
arXiv Detail & Related papers (2023-08-24T21:16:03Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Vehicle Dynamics Modeling for Autonomous Racing Using Gaussian Processes [0.0]
This paper presents the most detailed analysis of the applicability of Gaussian process (GP) models for approximating vehicle dynamics for autonomous racing.
We construct dynamic and extended kinematic models for the popular F1TENTH racing platform.
arXiv Detail & Related papers (2023-06-06T04:53:06Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward network is competitive (a minimal sketch of this recipe appears after this list).
The results also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z)
- DriverGym: Democratising Reinforcement Learning for Autonomous Driving [75.91049219123899]
We propose DriverGym, an open-source environment for developing reinforcement learning algorithms for autonomous driving.
DriverGym provides access to more than 1000 hours of expert logged data and also supports reactive and data-driven agent behavior.
The performance of an RL policy can be easily validated on real-world data using our extensive and flexible closed-loop evaluation protocol.
arXiv Detail & Related papers (2021-11-12T11:47:08Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- On the Theory of Reinforcement Learning with Once-per-Episode Feedback [120.5537226120512]
We introduce a theory of reinforcement learning in which the learner receives feedback only once at the end of an episode.
This is arguably more representative of real-world applications than the traditional requirement that the learner receive feedback at every time step.
arXiv Detail & Related papers (2021-05-29T19:48:51Z)
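As referenced in the "Demystifying the Physics..." entry above, a minimal sketch of attending over neighboring vehicles with multi-head attention. The per-vehicle feature layout, embedding size, and the use of PyTorch's nn.MultiheadAttention are illustrative assumptions, not that paper's code.

# Illustrative sketch: the ego vehicle's embedding attends over neighboring
# vehicles' features with two attention heads.
import torch
import torch.nn as nn

class NeighborAttention(nn.Module):
    def __init__(self, feat_dim=7, embed_dim=64, num_heads=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, ego, neighbors):
        # ego: (batch, feat_dim); neighbors: (batch, n_vehicles, feat_dim)
        q = self.embed(ego).unsqueeze(1)     # (batch, 1, embed_dim)
        kv = self.embed(neighbors)           # (batch, n_vehicles, embed_dim)
        out, weights = self.attn(q, kv, kv)  # weights averaged over heads by default
        return out.squeeze(1), weights

Passing average_attn_weights=False to the attention call (available in recent PyTorch versions) exposes per-head weights, the kind of signal one would inspect to attribute one head to neighboring vehicles and another to the leader.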
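As referenced in the RvS entry above, a minimal sketch of offline RL via supervised learning: a small feedforward policy conditioned on an outcome variable (for example a target return or a goal) is trained by maximizing the likelihood of the dataset actions. The Gaussian policy head and layer sizes are illustrative assumptions, not that paper's exact architecture.

# Illustrative sketch: RvS-style conditional behavior cloning with a
# two-layer feedforward network and no temporal-difference learning.
import torch
import torch.nn as nn

class RvSPolicy(nn.Module):
    def __init__(self, obs_dim, cond_dim, act_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs, cond):
        h = self.body(torch.cat([obs, cond], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

def rvs_loss(policy, obs, cond, act):
    # Negative log-likelihood of the dataset actions under the policy.
    return -policy(obs, cond).log_prob(act).sum(-1).mean()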
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.