Racing Towards Reinforcement Learning based control of an Autonomous
Formula SAE Car
- URL: http://arxiv.org/abs/2308.13088v1
- Date: Thu, 24 Aug 2023 21:16:03 GMT
- Authors: Aakaash Salvaji, Harry Taylor, David Valencia, Trevor Gee, Henry
Williams
- Abstract summary: This paper presents the initial investigation into utilising Deep Reinforcement Learning (RL) for end-to-end control of an autonomous FS race car.
We train two state-of-the-art RL algorithms in simulation on tracks analogous to the full-scale design on a Turtlebot2 platform.
The results demonstrate that our approach can successfully learn to race in simulation and then transfer to a real-world racetrack on the physical platform.
- Score: 1.0124625066746598
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the rising popularity of autonomous navigation research, Formula Student
(FS) events are introducing a Driverless Vehicle (DV) category to their event
list. This paper presents the initial investigation into utilising Deep
Reinforcement Learning (RL) for end-to-end control of an autonomous FS race car
for these competitions. We train two state-of-the-art RL algorithms in
simulation on tracks analogous to the full-scale design on a Turtlebot2
platform. The results demonstrate that our approach can successfully learn to
race in simulation and then transfer to a real-world racetrack on the physical
platform. Finally, we provide insights into the limitations of the presented
approach and guidance into the future directions for applying RL toward
full-scale autonomous FS racing.
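As a rough illustration of the train-in-simulation-then-transfer workflow the abstract describes, the sketch below trains a tabular Q-learning agent on a toy centreline-following task. The environment, reward shaping, and hyperparameters are all assumptions for illustration; the paper itself trains deep RL agents in a Turtlebot2 simulation.

```python
import random

class ToyTrackEnv:
    """Toy centreline-following task (an illustrative stand-in, not the
    paper's Turtlebot2 simulation). State: discretised lateral offset
    from the centreline. Actions: steer left (0), straight (1), right (2)."""

    def __init__(self, half_width=5):
        self.half_width = half_width
        self.offset = 0

    def reset(self):
        self.offset = random.randint(-self.half_width, self.half_width)
        return self.offset

    def step(self, action):
        self.offset += action - 1                 # lateral move of -1, 0, +1
        off_track = abs(self.offset) > self.half_width
        # Penalise distance from the centreline; large penalty for leaving it.
        reward = -abs(self.offset) - (10.0 if off_track else 0.0)
        return self.offset, reward, off_track

def train(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning: a minimal sketch of learning a control policy
    entirely in simulation before any real-world deployment."""
    q = {}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(50):
            if random.random() < eps:
                a = random.randrange(3)            # explore
            else:
                a = max(range(3), key=lambda a: q.get((s, a), 0.0))
            s2, r, done = env.step(a)
            best_next = max(q.get((s2, b), 0.0) for b in range(3))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return q
```

A trained agent would then be run greedily; the sim-to-real step in the paper amounts to deploying the learned policy on the physical platform.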
Related papers
- A Simulation Benchmark for Autonomous Racing with Large-Scale Human Data [12.804541200469536]
This paper proposes a racing simulation platform based on the simulator Assetto Corsa to test, validate, and benchmark autonomous driving algorithms.
Our contributions include the development of this simulation platform, several state-of-the-art algorithms tailored to the racing environment, and a comprehensive dataset collected from human drivers.
arXiv Detail & Related papers (2024-07-23T17:45:16Z)
- Deep Reinforcement Learning for Local Path Following of an Autonomous Formula SAE Vehicle [0.36868085124383626]
This paper presents the use of Deep Reinforcement Learning (DRL) and Inverse Reinforcement Learning (IRL) to map locally-observed cone positions to a desired steering angle for race track following.
Tests performed in simulation and the real world suggest that both algorithms can successfully train models for local path following.
arXiv Detail & Related papers (2024-01-05T17:04:43Z)
- er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High Speeds [61.91756903900903]
The Indy Autonomous Challenge (IAC) brought together nine autonomous racing teams competing at unprecedented speeds and in head-to-head scenarios, using independently developed software on open-wheel racecars.
This paper presents the complete software architecture used by team TII EuroRacing (TII-ER), covering all the modules needed to avoid static obstacles, perform active overtakes and reach speeds above 75 m/s (270 km/h).
Overall results and the performance of each module are described, as well as the lessons learned during the first two events of the competition on oval tracks, where the team placed respectively second and third.
arXiv Detail & Related papers (2023-10-27T12:52:34Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Vehicle Dynamics Modeling for Autonomous Racing Using Gaussian Processes [0.0]
This paper presents the most detailed analysis of the applicability of GP models for approximating vehicle dynamics for autonomous racing.
We construct dynamic and extended kinematic models for the popular F1TENTH racing platform.
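For reference, the kind of kinematic bicycle model such work extends can be sketched in a few lines; the wheelbase value below is an assumed, F1TENTH-typical figure, not taken from the paper.

```python
import math

def kinematic_bicycle_step(state, accel, steer, wheelbase=0.33, dt=0.01):
    """One Euler-integration step of the standard kinematic bicycle model.

    state = (x, y, yaw, v). The 0.33 m wheelbase is an assumption
    (typical for F1TENTH-scale cars), as are dt and the input ranges."""
    x, y, yaw, v = state
    x += v * math.cos(yaw) * dt                  # position update
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt  # heading from steer angle
    v += accel * dt                              # longitudinal dynamics
    return (x, y, yaw, v)
```

Extended kinematic and dynamic models add effects (tyre slip, load transfer) that this idealised single-track model ignores, which is where learned corrections such as GPs come in.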
arXiv Detail & Related papers (2023-06-06T04:53:06Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data [4.042350304426975]
We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space.
We put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments.
Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
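DDPG explores a continuous action space by adding temporally correlated noise to a deterministic policy's output. A common choice, sketched below, is an Ornstein-Uhlenbeck process; the parameters are conventional defaults, not values from the paper.

```python
import math
import random

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise, often added to DDPG's
    deterministic actions for continuous control. theta and sigma are
    conventional defaults here (assumptions, not the paper's settings)."""

    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, dt=0.05):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = mu

    def sample(self):
        # Mean-reverting drift toward mu plus Gaussian diffusion.
        self.x += (self.theta * (self.mu - self.x) * self.dt
                   + self.sigma * math.sqrt(self.dt) * random.gauss(0.0, 1.0))
        return self.x

def noisy_action(policy_action, noise, low=-1.0, high=1.0):
    """Clip the exploration-perturbed action to the actuator range."""
    return max(low, min(high, policy_action + noise.sample()))
```

Because consecutive samples are correlated, the noise produces smooth steering and throttle perturbations rather than the jitter of independent Gaussian noise.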
arXiv Detail & Related papers (2021-04-22T14:40:12Z)
- A Software Architecture for Autonomous Vehicles: Team LRM-B Entry in the First CARLA Autonomous Driving Challenge [49.976633450740145]
This paper presents the architecture design for the navigation of an autonomous vehicle in a simulated urban environment.
Our architecture was made towards meeting the requirements of CARLA Autonomous Driving Challenge.
arXiv Detail & Related papers (2020-10-23T18:07:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.