Learn 2 Rage: Experiencing The Emotional Roller Coaster That Is Reinforcement Learning
- URL: http://arxiv.org/abs/2410.18462v1
- Date: Thu, 24 Oct 2024 06:16:52 GMT
- Title: Learn 2 Rage: Experiencing The Emotional Roller Coaster That Is Reinforcement Learning
- Authors: Lachlan Mares, Stefan Podgorski, Ian Reid
- Abstract summary: This work presents the experiments and solution outline for our team's winning submission in the Learn To Race Autonomous Racing Virtual Challenge 2022 hosted by AIcrowd.
The objective of the Learn-to-Race competition is to push the boundary of autonomous technology, with a focus on achieving the safety benefits of autonomous driving.
We focused our initial efforts on implementation of Soft Actor Critic (SAC) variants.
Our goal was to learn non-trivial control of the race car exclusively from visual and geometric features, directly mapping pixels to control actions.
- Score: 5.962453678471195
- License:
- Abstract: This work presents the experiments and solution outline for our team's winning submission in the Learn To Race Autonomous Racing Virtual Challenge 2022 hosted by AIcrowd. The objective of the Learn-to-Race competition is to push the boundary of autonomous technology, with a focus on achieving the safety benefits of autonomous driving. In its description, the competition is framed as a reinforcement learning (RL) challenge. We focused our initial efforts on implementing Soft Actor Critic (SAC) variants. Our goal was to learn non-trivial control of the race car exclusively from visual and geometric features, directly mapping pixels to control actions. We made suitable modifications to the default reward policy to promote smooth steering and acceleration control. The framework for the competition provided real-time simulation, meaning a single episode (learning experience) is measured in minutes. Instead of pursuing parallelisation of episodes, we opted to explore a more traditional approach in which the visual perception was processed (via learned operators) and fed into rule-based controllers. Such a system, while not as academically "attractive" as a pixels-to-actions approach, requires less training, is more explainable, generalises better, and is easily tuned; it ultimately outperformed all other agents in the competition by a large margin.
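As a rough illustration of the pixels-to-actions setup the abstract describes, the sketch below shows a SAC-style actor: a small CNN encoder over a camera frame followed by a tanh-squashed Gaussian head over steering and throttle. The layer sizes, input resolution, and two-dimensional action space are assumptions for illustration, not the network used in the paper.

```python
import torch
import torch.nn as nn

class PixelActor(nn.Module):
    """Minimal SAC-style actor mapping a camera image to [steer, throttle].

    Generic sketch: layer sizes and the two-dimensional action space are
    assumptions, not the architecture used in the paper.
    """

    def __init__(self, action_dim: int = 2):
        super().__init__()
        # Small CNN encoder over an 84x84 RGB frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.trunk = nn.Sequential(nn.Linear(64 * 7 * 7, 256), nn.ReLU())
        self.mu = nn.Linear(256, action_dim)
        self.log_std = nn.Linear(256, action_dim)

    def forward(self, image: torch.Tensor):
        h = self.trunk(self.encoder(image))
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        raw = dist.rsample()                      # reparameterised sample
        action = torch.tanh(raw)                  # squash to [-1, 1]
        # Log-prob with tanh correction, as in standard SAC.
        log_prob = dist.log_prob(raw) - torch.log(1 - action.pow(2) + 1e-6)
        return action, log_prob.sum(dim=-1)

# Example: one forward pass on a batch of two 84x84 RGB frames.
if __name__ == "__main__":
    actor = PixelActor()
    frames = torch.rand(2, 3, 84, 84)
    actions, log_probs = actor(frames)
    print(actions.shape, log_probs.shape)  # torch.Size([2, 2]) torch.Size([2])
```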
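The winning modular design, learned perception feeding a rule-based controller, might look roughly like the following sketch. The paper does not publish its controller equations, so the perception outputs, gains, speed law, and command smoothing below are hypothetical placeholders.

```python
import numpy as np

class RuleBasedController:
    """Hypothetical rule-based controller fed by learned perception outputs.

    Sketch only: the proportional gains, speed law, and smoothing factor are
    assumptions, not the controllers described in the paper.
    """

    def __init__(self, k_offset=0.5, k_heading=1.0, v_max=30.0, k_curv=60.0,
                 smooth=0.8):
        self.k_offset = k_offset      # gain on lateral offset from centreline [m]
        self.k_heading = k_heading    # gain on heading error [rad]
        self.v_max = v_max            # straight-line target speed [m/s]
        self.k_curv = k_curv          # how strongly curvature reduces speed
        self.smooth = smooth          # exponential smoothing of commands
        self._prev = np.zeros(2)      # last [steer, throttle] command

    def act(self, offset, heading_err, curvature, speed):
        # Proportional steering on the perceived track geometry.
        steer = np.clip(-self.k_offset * offset - self.k_heading * heading_err,
                        -1.0, 1.0)
        # Slow down in proportion to upcoming curvature.
        v_target = self.v_max / (1.0 + self.k_curv * abs(curvature))
        throttle = np.clip((v_target - speed) / self.v_max, -1.0, 1.0)
        # Low-pass filter the commands for smooth steering and acceleration.
        cmd = self.smooth * self._prev + (1 - self.smooth) * np.array([steer, throttle])
        self._prev = cmd
        return cmd

# Example step: perception (hypothetically) reports a 0.4 m offset,
# a small heading error, and gentle curvature ahead, at 22 m/s.
ctrl = RuleBasedController()
print(ctrl.act(offset=0.4, heading_err=0.05, curvature=0.01, speed=22.0))
```

Because such a control law is a handful of interpretable gains, it can be tuned per track without retraining, which matches the explainability and tuning benefits claimed in the abstract.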
Related papers
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments [0.8122270502556374]
Generative Adversarial Imitation Learning (GAIL) can train policies without requiring an explicitly defined reward function.
We show that both approaches are capable of imitating the expert trajectory from start to finish after training.
arXiv Detail & Related papers (2021-10-16T15:04:13Z)
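To make the GAIL summary above concrete, here is a minimal, generic sketch of one update: a discriminator learns to separate expert from policy state-action pairs, and its output serves as a surrogate reward, so no reward function has to be hand-defined. The dimensions and hyperparameters are assumptions, not the configuration used in the paper above.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2  # assumed dimensions for illustration

disc = nn.Sequential(
    nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
    nn.Linear(64, 1),                 # logit: expert vs. policy
)
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(expert_sa: torch.Tensor, policy_sa: torch.Tensor):
    """One discriminator update: label expert pairs 1, policy pairs 0."""
    logits = disc(torch.cat([expert_sa, policy_sa]))
    labels = torch.cat([torch.ones(len(expert_sa), 1),
                        torch.zeros(len(policy_sa), 1)])
    loss = bce(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def surrogate_reward(policy_sa: torch.Tensor) -> torch.Tensor:
    """Reward the policy for fooling the discriminator (no hand-defined reward)."""
    with torch.no_grad():
        d = torch.sigmoid(disc(policy_sa))
    return -torch.log(1.0 - d + 1e-8).squeeze(-1)

# Example with random stand-in batches; in practice these come from an expert
# dataset and rollouts of the current policy, which is then trained with an
# on-policy RL method on `surrogate_reward`.
expert_batch = torch.randn(32, obs_dim + act_dim)
policy_batch = torch.randn(32, obs_dim + act_dim)
discriminator_step(expert_batch, policy_batch)
print(surrogate_reward(policy_batch).shape)  # torch.Size([32])
```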
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z)
- Learn-to-Race: A Multimodal Control Environment for Autonomous Racing [23.798765519590734]
We introduce a new environment in which agents Learn-to-Race (L2R) in simulated Formula-E style racing.
Our environment, which includes a simulator and an interfacing training framework, accurately models vehicle dynamics and racing conditions.
Next, we propose the L2R task with challenging metrics, inspired by learning-to-drive challenges, Formula-E racing, and multimodal trajectory prediction for autonomous driving.
arXiv Detail & Related papers (2021-03-22T04:03:06Z)
- Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space [63.57289340402389]
Deep Latent Competition (DLC) is a reinforcement learning algorithm that learns competitive visual control policies through self-play in imagination.
Imagined self-play reduces costly sample generation in the real world, while the latent representation enables planning to scale gracefully with observation dimensionality.
arXiv Detail & Related papers (2021-02-19T09:00:29Z)
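The "self-play in imagination" idea from the Deep Latent Competition summary above can be sketched generically: a policy is unrolled inside a learned latent dynamics model, so updates consume imagined rather than real transitions. The single-agent setting, module sizes, and losses below are simplifications and assumptions, not the DLC architecture.

```python
import torch
import torch.nn as nn

latent_dim, act_dim = 32, 2

encoder = nn.Sequential(nn.Linear(64, 64), nn.ELU(), nn.Linear(64, latent_dim))
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ELU(),
                         nn.Linear(64, latent_dim))   # z_{t+1} = f(z_t, a_t)
reward_head = nn.Linear(latent_dim, 1)                # predicted reward
policy = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(),
                       nn.Linear(64, act_dim), nn.Tanh())

def imagined_return(z0: torch.Tensor, horizon: int = 15) -> torch.Tensor:
    """Roll the latent model forward; no real-world samples are consumed."""
    z, rewards = z0, []
    for _ in range(horizon):
        a = policy(z)
        z = dynamics(torch.cat([z, a], dim=-1))
        rewards.append(reward_head(z))
    return torch.cat(rewards, dim=-1).sum(dim=-1)     # sum of predicted rewards

# Example: encode a batch of (stand-in, low-dimensional) observations and take
# one gradient step that maximises the imagined return w.r.t. the policy.
obs = torch.randn(16, 64)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss = -imagined_return(encoder(obs)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```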
- Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z)
- Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning [39.719051858649216]
We propose a learning-based system for autonomous car racing by leveraging a high-fidelity physical car simulation.
We deploy our system in Gran Turismo Sport, a world-leading car simulator known for its realistic physics simulation of different race cars and tracks.
Our trained policy achieves autonomous racing performance that goes beyond what had been achieved so far by the built-in AI.
arXiv Detail & Related papers (2020-08-18T15:06:44Z)
- AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in autonomous drone racing.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation-based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
- Learning by Cheating [72.9701333689606]
We show that the challenging problem of vision-based driving can be simplified by decomposing it into two stages.
We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art.
Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art.
arXiv Detail & Related papers (2019-12-27T18:59:04Z)
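The two-stage decomposition in the Learning by Cheating summary above can be sketched as: first train a "privileged" agent with access to ground-truth state, then distil it into a vision-only agent by imitation. The models, shapes, and losses below are hypothetical stand-ins, not the paper's code.

```python
import torch
import torch.nn as nn

state_dim, act_dim = 16, 2

privileged = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                           nn.Linear(128, act_dim), nn.Tanh())

vision_agent = nn.Sequential(                       # consumes raw images only
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, act_dim), nn.Tanh(),
)

def stage1_step(states, expert_actions, opt):
    """Stage 1: the privileged agent imitates the expert using full state."""
    loss = nn.functional.mse_loss(privileged(states), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def stage2_step(images, states, opt):
    """Stage 2: the vision agent imitates the (frozen) privileged agent."""
    with torch.no_grad():
        targets = privileged(states)                # "cheating" supervision
    loss = nn.functional.mse_loss(vision_agent(images), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random stand-in batches.
opt1 = torch.optim.Adam(privileged.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(vision_agent.parameters(), lr=1e-3)
stage1_step(torch.randn(8, state_dim), torch.randn(8, act_dim), opt1)
stage2_step(torch.rand(8, 3, 96, 96), torch.randn(8, state_dim), opt2)
```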