High-speed Autonomous Drifting with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2001.01377v1
- Date: Mon, 6 Jan 2020 03:05:52 GMT
- Title: High-speed Autonomous Drifting with Deep Reinforcement Learning
- Authors: Peide Cai, Xiaodong Mei, Lei Tai, Yuxiang Sun, Ming Liu
- Abstract summary: We propose a robust drift controller without explicit motion equations.
Our controller is capable of making the vehicle drift through various sharp corners quickly and stably on an unseen map.
- Score: 15.766089739894207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drifting is a complicated task for autonomous vehicle control. Most
traditional methods in this area are based on motion equations derived from an
understanding of vehicle dynamics, which are difficult to model precisely.
We propose a robust drift controller without explicit motion equations, which
is based on the latest model-free deep reinforcement learning algorithm soft
actor-critic. The drift control problem is formulated as a trajectory following
task, where the error-based state and reward are designed. After being trained
on tracks with different levels of difficulty, our controller is capable of
making the vehicle drift through various sharp corners quickly and stably on
an unseen map. The proposed controller is further shown to have excellent
generalization ability, which can directly handle unseen vehicle types with
different physical properties, such as mass, tire friction, etc.
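The abstract's formulation of drifting as a trajectory-following task with an error-based state and reward can be sketched as follows. The specific error terms, weights, and function names here are illustrative assumptions, not the paper's exact design:

```python
import math

def error_state(pos, heading, speed, ref_pos, ref_heading, ref_speed):
    """Error-based state relative to the reference trajectory:
    cross-track error, wrapped heading error, and speed error.
    (Illustrative; the paper's state may include more terms.)"""
    e_y = math.hypot(pos[0] - ref_pos[0], pos[1] - ref_pos[1])     # cross-track error
    e_psi = (heading - ref_heading + math.pi) % (2 * math.pi) - math.pi  # heading error in (-pi, pi]
    e_v = speed - ref_speed                                        # speed error
    return (e_y, e_psi, e_v)

def drift_reward(state, w_y=1.0, w_psi=0.5, w_v=0.1):
    """Reward decays exponentially with the weighted squared errors,
    so it peaks at 1.0 on the reference trajectory."""
    e_y, e_psi, e_v = state
    return math.exp(-(w_y * e_y**2 + w_psi * e_psi**2 + w_v * e_v**2))
```

A model-free algorithm such as soft actor-critic would then maximize the expected discounted sum of this reward over steering and throttle actions.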
Related papers
- MagicDriveDiT: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control [68.74166535159311]
We introduce MagicDriveDiT, a novel approach based on the DiT architecture.
By incorporating spatial-temporal conditional encoding, MagicDriveDiT achieves precise control over spatial-temporal latents.
Experiments show its superior performance in generating realistic street scene videos with higher resolution and more frames.
arXiv Detail & Related papers (2024-11-21T03:13:30Z) - Reference-Free Formula Drift with Reinforcement Learning: From Driving Data to Tire Energy-Inspired, Real-World Policies [1.3499500088995464]
Real-time drifting strategies put the car where needed while bypassing expensive trajectory optimization.
We design a reinforcement learning agent that builds on the concept of tire energy absorption to autonomously drift through changing and complex waypoint configurations.
Experiments on a Toyota GR Supra and Lexus LC 500 show that the agent is capable of drifting smoothly through varying waypoint configurations with tracking error as low as 10 cm while stably pushing the vehicles to sideslip angles of up to 63°.
arXiv Detail & Related papers (2024-10-28T13:10:15Z) - Learning Inverse Kinodynamics for Autonomous Vehicle Drifting [0.0]
We learn the kinodynamic model of a small autonomous vehicle, and observe the effect it has on motion planning, specifically autonomous drifting.
Our approach is able to learn a kinodynamic model for high-speed circular navigation, and is able to avoid obstacles on an autonomous drift at high speed by correcting an executed curvature for loose drifts.
arXiv Detail & Related papers (2024-02-22T19:24:56Z) - Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing [0.0]
This paper addresses the issue of increasing the performance of reinforcement learning (RL) solutions for autonomous racing cars.
We propose a partial end-to-end algorithm that decouples the planning and control tasks.
By leveraging the robustness of a classical controller, our partial end-to-end driving algorithm exhibits better robustness towards model mismatches than standard end-to-end algorithms.
arXiv Detail & Related papers (2023-12-11T14:27:10Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and over the course of training approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds [100.61456258283245]
This paper presents a multi-layer motion planning and control architecture for autonomous racing.
The proposed solution has been applied on a Dallara AV-21 racecar and tested at oval race tracks achieving lateral accelerations up to 25 $m/s^2$.
arXiv Detail & Related papers (2022-07-22T15:16:54Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
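The domain-randomization step described above amounts to re-sampling the simulator's physical parameters each training episode. A minimal sketch, in which the parameter names and relative spreads are illustrative assumptions rather than the paper's actual values:

```python
import random

def randomize_dynamics(nominal, spread=None, rng=random):
    """Sample each physical parameter uniformly within a relative band
    around its nominal value, once per training episode.
    Parameter names and spreads are illustrative, not from the paper."""
    spread = spread or {"mass": 0.2, "ground_friction": 0.3, "motor_gain": 0.1}
    return {name: value * rng.uniform(1 - spread.get(name, 0.0),
                                      1 + spread.get(name, 0.0))
            for name, value in nominal.items()}

# Example: draw a fresh set of dynamics parameters at episode reset.
nominal = {"mass": 32.0, "ground_friction": 0.9, "motor_gain": 1.0}
episode_params = randomize_dynamics(nominal)
```

Training against many such perturbed dynamics discourages the policy from overfitting to one simulator configuration, which is what makes sim-to-real transfer feasible.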
arXiv Detail & Related papers (2021-03-26T07:14:01Z) - Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z) - BayesRace: Learning to race autonomously using prior experience [20.64931380046805]
We present a model-based planning and control framework for autonomous racing.
Our approach alleviates the gap induced by simulation-based controller design by learning from on-board sensor measurements.
arXiv Detail & Related papers (2020-05-10T19:15:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.