Autonomous Racing using a Hybrid Imitation-Reinforcement Learning Architecture
- URL: http://arxiv.org/abs/2110.05437v1
- Date: Mon, 11 Oct 2021 17:26:55 GMT
- Title: Autonomous Racing using a Hybrid Imitation-Reinforcement Learning Architecture
- Authors: Chinmay Vilas Samak, Tanmay Vilas Samak and Sivanathan Kandhasamy
- Abstract summary: We present an end-to-end control strategy for autonomous vehicles aimed at minimizing lap times in a time attack racing event.
We also introduce AutoRACE Simulator, which was employed to simulate accurate vehicular and environmental dynamics.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a rigorous end-to-end control strategy for
autonomous vehicles aimed at minimizing lap times in a time attack racing
event. We also introduce AutoRACE Simulator developed as a part of this
research project, which was employed to simulate accurate vehicular and
environmental dynamics along with realistic audio-visual effects. We adopted a
hybrid imitation-reinforcement learning architecture and crafted a novel reward
function to train a deep neural network policy to drive (using imitation
learning) and race (using reinforcement learning) a car autonomously in less
than 20 hours. Deployment results were reported as a direct comparison of 10
autonomous laps against 100 manual laps by 10 different human players. The
autonomous agent not only exhibited superior performance by gaining 0.96
seconds over the best manual lap, but it also dominated the human players by
1.46 seconds with regard to the mean lap time. This dominance could be
justified in terms of better trajectory optimization and lower reaction time of
the autonomous agent.
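As a rough illustration of the two-stage pipeline described in the abstract (imitation learning to drive, then reinforcement learning to race), here is a minimal, self-contained sketch. The linear policy, the toy demonstration data, and the hand-rolled hill-climbing reward search are hypothetical stand-ins for illustration only, not the paper's actual network architecture, reward function, or AutoRACE environment.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearPolicy:
    """Tiny linear policy mapping a state vector to a scalar steering action."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def act(self, state):
        return float(state @ self.w)

def imitation_pretrain(policy, states, actions):
    """Stage 1 (drive): behaviour cloning via least-squares fit to demonstrations."""
    policy.w, *_ = np.linalg.lstsq(states, actions, rcond=None)

def rl_finetune(policy, reward_fn, iters=200, sigma=0.05):
    """Stage 2 (race): simple hill climbing on a lap-time-style reward,
    starting from the imitation-learned weights."""
    best = reward_fn(policy.w)
    for _ in range(iters):
        cand = policy.w + sigma * rng.standard_normal(policy.w.shape)
        r = reward_fn(cand)
        if r > best:
            policy.w, best = cand, r
    return best

# Toy "human demonstrator": steer proportionally to the first state feature.
states = rng.standard_normal((100, 3))
expert_w = np.array([1.0, 0.0, 0.0])
actions = states @ expert_w

# Toy reward: a slightly more aggressive gain is faster around the track.
target_w = np.array([1.2, 0.0, 0.0])
reward = lambda w: -np.sum((w - target_w) ** 2)

policy = LinearPolicy(3)
imitation_pretrain(policy, states, actions)   # learn to drive
final_reward = rl_finetune(policy, reward)    # learn to race
```

The split mirrors the rationale in the abstract: imitation quickly yields a competent initial policy from demonstrations, and the reinforcement stage then pushes performance past the demonstrator by optimizing the racing reward directly.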
Related papers
- er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High Speeds [61.91756903900903]
The Indy Autonomous Challenge (IAC) brought together nine autonomous racing teams competing at unprecedented speeds and in head-to-head scenarios, using independently developed software on open-wheel racecars.
This paper presents the complete software architecture used by team TII EuroRacing (TII-ER), covering all the modules needed to avoid static obstacles, perform active overtakes and reach speeds above 75 m/s (270 km/h).
Overall results and the performance of each module are described, as well as the lessons learned during the first two events of the competition on oval tracks, where the team placed respectively second and third.
arXiv Detail & Related papers (2023-10-27T12:52:34Z) - Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z) - Vehicle Dynamics Modeling for Autonomous Racing Using Gaussian Processes [0.0]
This paper presents the most detailed analysis of the applicability of GP models for approximating vehicle dynamics for autonomous racing.
We construct dynamic and extended kinematic models for the popular F1TENTH racing platform.
arXiv Detail & Related papers (2023-06-06T04:53:06Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free deep reinforcement learning planner: a neural network trained to predict acceleration and steering angle.
To deploy the system on board the real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z) - Race Driver Evaluation at a Driving Simulator using a physical Model and a Machine Learning Approach [1.9395755884693817]
We present a method to study and evaluate race drivers on a driver-in-the-loop simulator.
An overall performance score, a vehicle-trajectory score and a handling score are introduced to evaluate drivers.
We show that the neural network is accurate and robust, with a root-mean-square error of 2-5%, and can replace the optimisation-based method.
arXiv Detail & Related papers (2022-01-27T07:32:32Z) - Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z) - Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z) - Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning [39.719051858649216]
We propose a learning-based system for autonomous car racing by leveraging a high-fidelity physical car simulation.
We deploy our system in Gran Turismo Sport, a world-leading car simulator known for its realistic physics simulation of different race cars and tracks.
Our trained policy achieves autonomous racing performance that goes beyond what had been achieved so far by the built-in AI.
arXiv Detail & Related papers (2020-08-18T15:06:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.