Reference-Free Formula Drift with Reinforcement Learning: From Driving Data to Tire Energy-Inspired, Real-World Policies
- URL: http://arxiv.org/abs/2410.20990v1
- Date: Mon, 28 Oct 2024 13:10:15 GMT
- Title: Reference-Free Formula Drift with Reinforcement Learning: From Driving Data to Tire Energy-Inspired, Real-World Policies
- Authors: Franck Djeumou, Michael Thompson, Makoto Suminaka, John Subosits
- Abstract summary: Real-time drifting strategies put the car where needed while bypassing expensive trajectory optimization.
We design a reinforcement learning agent that builds on the concept of tire energy absorption to autonomously drift through changing and complex waypoint configurations.
Experiments on a Toyota GR Supra and Lexus LC 500 show that the agent is capable of drifting smoothly through varying waypoint configurations with tracking error as low as 10 cm while stably pushing the vehicles to sideslip angles of up to 63°.
- Score: 1.3499500088995464
- License:
- Abstract: The skill to drift a car--i.e., operate in a state of controlled oversteer like professional drivers--could give future autonomous cars maximum flexibility when they need to retain control in adverse conditions or avoid collisions. We investigate real-time drifting strategies that put the car where needed while bypassing expensive trajectory optimization. To this end, we design a reinforcement learning agent that builds on the concept of tire energy absorption to autonomously drift through changing and complex waypoint configurations while safely staying within track bounds. We achieve zero-shot deployment on the car by training the agent in a simulation environment built on top of a neural stochastic differential equation vehicle model learned from pre-collected driving data. Experiments on a Toyota GR Supra and Lexus LC 500 show that the agent is capable of drifting smoothly through varying waypoint configurations with tracking error as low as 10 cm while stably pushing the vehicles to sideslip angles of up to 63°.
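For readers who want a concrete handle on the tire-energy idea in the abstract, below is a minimal Python sketch of how a tire energy absorption rate (tire force dotted with slip velocity) could enter an RL reward alongside waypoint-tracking and sideslip terms. The per-tire force/slip inputs, the weights, and the overall reward shape are illustrative assumptions; the paper's actual reward is not reproduced here.

```python
# Illustrative sketch only: quantities, weights, and reward shape are assumptions,
# not the formulation used in the paper.
import numpy as np

def tire_energy_rate(tire_force_xy: np.ndarray, slip_vel_xy: np.ndarray) -> float:
    """Instantaneous power dissipated by one tire: force dotted with slip velocity."""
    return float(np.dot(tire_force_xy, slip_vel_xy))

def drift_reward(pos_xy, waypoint_xy, sideslip, tire_forces, slip_vels,
                 w_track=1.0, w_slip=0.1, w_energy=1e-4):
    """Hypothetical reward: track the waypoint, hold a large sideslip angle,
    and shape behaviour with the total tire energy absorption rate."""
    tracking_err = np.linalg.norm(pos_xy - waypoint_xy)          # metres
    energy_rate = sum(tire_energy_rate(f, v)                     # watts
                      for f, v in zip(tire_forces, slip_vels))
    return (-w_track * tracking_err
            + w_slip * abs(sideslip)                # encourage sustained oversteer
            - w_energy * max(energy_rate, 0.0))     # penalise excess dissipation

# Single-step example with made-up values.
r = drift_reward(pos_xy=np.array([3.0, 1.2]),
                 waypoint_xy=np.array([3.1, 1.0]),
                 sideslip=np.deg2rad(45.0),
                 tire_forces=[np.array([1500.0, 3000.0])] * 4,
                 slip_vels=[np.array([2.0, 4.0])] * 4)
print(f"reward = {r:.2f}")
```

A trained agent would receive such a signal at every step of the learned simulation environment described in the abstract.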
Related papers
- WROOM: An Autonomous Driving Approach for Off-Road Navigation [17.74237088460657]
We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments.
We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy.
We propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car.
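As a rough illustration of the warm-start step mentioned above, the sketch below behaviour-clones a toy rule-based controller into a small policy network before any PPO fine-tuning. The observation layout, network size, and the proportional steering rule are assumptions made for the example, and the PPO stage itself is omitted.

```python
# Illustrative warm-start only: observation layout, network, and the
# rule-based "expert" below are assumptions; PPO fine-tuning is omitted.
import torch
import torch.nn as nn

def rule_based_controller(obs: torch.Tensor) -> torch.Tensor:
    """Toy expert: steer against the heading error, hold a constant throttle."""
    heading_err = obs[:, 0:1]
    steer = torch.clamp(-0.5 * heading_err, -1.0, 1.0)
    throttle = torch.full_like(steer, 0.6)
    return torch.cat([steer, throttle], dim=1)

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                       nn.Linear(64, 64), nn.Tanh(),
                       nn.Linear(64, 2))                # outputs [steer, throttle]
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for step in range(2000):                                # behaviour-cloning warm-start
    obs = torch.randn(256, 4)                           # stand-in for logged observations
    with torch.no_grad():
        expert_actions = rule_based_controller(obs)
    loss = nn.functional.mse_loss(policy(obs), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The warm-started `policy` would then initialise the PPO actor.
```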
arXiv Detail & Related papers (2024-04-12T23:55:59Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
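To make the idea of driving constraints more tangible, here is a simplified sketch of checking a car-following model's predicted acceleration against a few such rules. The specific constraints and thresholds below are assumptions chosen for illustration and are not taken from RACER.

```python
# Illustrative only: the constraints and thresholds below are assumptions
# chosen to show the pattern, not RACER's actual Rational Driving Constraints.
def check_rdc(accel_pred, gap, closing_speed, a_min=-5.0, a_max=2.5, safe_gap=5.0):
    """Return the constraints violated by one predicted time step.

    accel_pred:     model-predicted ego acceleration [m/s^2]
    gap:            bumper-to-bumper distance to the leader [m]
    closing_speed:  ego speed minus leader speed [m/s] (> 0 means closing)
    """
    violations = []
    if not a_min <= accel_pred <= a_max:
        violations.append("acceleration outside physical/comfort bounds")
    if gap < safe_gap and accel_pred > 0.0:
        violations.append("accelerating inside the minimum safe gap")
    if closing_speed > 2.0 and accel_pred > 0.0:
        violations.append("accelerating while closing fast on the leader")
    return violations

print(check_rdc(accel_pred=1.0, gap=3.0, closing_speed=3.5))
# -> flags the last two constraints for this (made-up) prediction
```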
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms, Deep Q-networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds [100.61456258283245]
This paper presents a multi-layer motion planning and control architecture for autonomous racing.
The proposed solution has been applied on a Dallara AV-21 racecar and tested at oval race tracks, achieving lateral accelerations up to 25 m/s².
arXiv Detail & Related papers (2022-07-22T15:16:54Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z)
- Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z)
- BayesRace: Learning to race autonomously using prior experience [20.64931380046805]
We present a model-based planning and control framework for autonomous racing.
Our approach alleviates the gap induced by simulation-based controller design by learning from on-board sensor measurements.
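A common way to learn from on-board measurements in this setting is to fit a residual correction on top of a nominal dynamics model; the sketch below shows that pattern with synthetic data and a linear least-squares fit. The nominal model, features, and fitting choice are illustrative assumptions rather than BayesRace's actual formulation.

```python
# Illustrative only: a linear residual fitted by least squares on synthetic
# data; BayesRace's actual model-learning approach is not reproduced here.
import numpy as np

def nominal_model(state, control, dt=0.02):
    """Very coarse nominal prediction: Euler-integrated point mass."""
    x, y, vx, vy = state
    ax, ay = control
    return np.array([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])

# Stand-in for logged (state, control, measured next state) data from the car.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))
controls = rng.normal(size=(500, 2))
nominal_next = np.array([nominal_model(s, u) for s, u in zip(states, controls)])
measured_next = nominal_next + 0.05 * states[:, [2, 3, 0, 1]]  # synthetic unmodelled effect

# Fit a linear residual: measured_next - nominal_next ≈ [state, control] @ W
features = np.hstack([states, controls])
residuals = measured_next - nominal_next
W, *_ = np.linalg.lstsq(features, residuals, rcond=None)

def corrected_model(state, control):
    """Nominal prediction plus the learned residual correction."""
    return nominal_model(state, control) + np.hstack([state, control]) @ W

corrected_next = np.array([corrected_model(s, u) for s, u in zip(states, controls)])
print("residual norm before:", np.linalg.norm(residuals))
print("residual norm after: ", np.linalg.norm(measured_next - corrected_next))
```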
arXiv Detail & Related papers (2020-05-10T19:15:06Z)
- High-speed Autonomous Drifting with Deep Reinforcement Learning [15.766089739894207]
We propose a robust drift controller without explicit motion equations.
Our controller is capable of making the vehicle drift through various sharp corners quickly and stably in the unseen map.
arXiv Detail & Related papers (2020-01-06T03:05:52Z)