Learning from Simulation, Racing in Reality
- URL: http://arxiv.org/abs/2011.13332v2
- Date: Fri, 7 May 2021 08:21:06 GMT
- Title: Learning from Simulation, Racing in Reality
- Authors: Eugenio Chisari, Alexander Liniger, Alisa Rupenyan, Luc Van Gool, John
Lygeros
- Abstract summary: We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
- Score: 126.56346065780895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a reinforcement learning-based solution to autonomously race on a
miniature race car platform. We show that a policy that is trained purely in
simulation using a relatively simple vehicle model, including model
randomization, can be successfully transferred to the real robotic setup. We
achieve this by using a novel policy-output regularization approach and a
lifted action space, which enables smooth actions while retaining aggressive
race car driving. We show that this regularized policy outperforms the Soft
Actor-Critic (SAC) baseline, both in simulation and on the real car, but it is
still outperformed by a state-of-the-art Model Predictive Controller (MPC).
Refining the policy with three hours of real-world interaction data allows the
reinforcement learning policy to achieve lap times similar to
the MPC controller while reducing track constraint violations by 50%.
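As a rough illustration of the lifted action space and output regularization described in the abstract, the sketch below has the policy command action rates, which are integrated into the actual steering and throttle inputs, with a penalty on large rates. All names, limits, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class LiftedActionSpace:
    """Minimal sketch: the policy outputs action *derivatives*, which are
    integrated into the actual control commands. Penalizing large
    derivatives encourages smooth yet still aggressive driving.
    All constants here are assumptions for illustration."""

    def __init__(self, dt=0.02,
                 low=np.array([-0.35, -1.0]),       # [steering (rad), throttle]
                 high=np.array([0.35, 1.0]),
                 rate_limit=np.array([5.0, 10.0]),  # max |da/dt| per channel
                 reg_weight=0.1):
        self.dt, self.low, self.high = dt, low, high
        self.rate_limit, self.reg_weight = rate_limit, reg_weight
        self.action = np.zeros_like(low)

    def reset(self):
        self.action = np.zeros_like(self.low)
        return self.action.copy()

    def step(self, policy_output):
        # Interpret the raw policy output in [-1, 1] as a normalized rate.
        rate = np.clip(policy_output, -1.0, 1.0) * self.rate_limit
        self.action = np.clip(self.action + rate * self.dt, self.low, self.high)
        # Regularization penalty to subtract from the environment reward:
        # it discourages jittery commands without forbidding fast maneuvers.
        reg_penalty = self.reg_weight * float(rate @ rate) * self.dt
        return self.action.copy(), reg_penalty
```

In a training loop, `action, penalty = lifted.step(policy(obs))` would feed `action` to the simulator and subtract `penalty` from the reward; the same wrapper could then run unchanged on the real car.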
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- CIMRL: Combining IMitation and Reinforcement Learning for Safe Autonomous Driving [45.05135725542318]
The Combining IMitation and Reinforcement Learning (CIMRL) approach enables training driving policies in simulation by leveraging imitative motion priors and safety constraints.
By combining RL and imitation, we demonstrate our method achieves state-of-the-art results in closed loop simulation and real world driving benchmarks.
arXiv Detail & Related papers (2024-06-13T07:31:29Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board the real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Spatiotemporal Costmap Inference for MPC via Deep Inverse Reinforcement Learning [27.243603228431564]
We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function.
The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task.
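To make the costmap-plus-MPC pattern above concrete, here is a toy sampling-based controller that scores candidate rollouts against a learned cost grid. The unicycle dynamics, the constant-steering sampling scheme, and every parameter are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def costmap_mpc_step(costmap, resolution, state, horizon=20, n_samples=256,
                     dt=0.1, v=2.0, rng=None):
    """Toy sampling-based MPC step. `costmap` is an (H, W) grid of per-cell
    costs (e.g. the output of a learned IRL model); the controller samples
    constant yaw-rate sequences, rolls out a unicycle model, sums the cost
    along each path, and returns the best first action."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, y, theta = state
    yaw_rates = rng.uniform(-0.5, 0.5, size=n_samples)  # candidate controls
    costs = np.zeros(n_samples)
    for i, w in enumerate(yaw_rates):
        xi, yi, th = x, y, theta
        for _ in range(horizon):
            th += w * dt
            xi += v * np.cos(th) * dt
            yi += v * np.sin(th) * dt
            # Look up the cost of the cell this rollout passes through.
            r = int(np.clip(yi / resolution, 0, costmap.shape[0] - 1))
            c = int(np.clip(xi / resolution, 0, costmap.shape[1] - 1))
            costs[i] += costmap[r, c]
    return yaw_rates[int(np.argmin(costs))]
```

A planner would call this once per control step with the current state and the costmap inferred by the reward model, then apply the returned yaw rate.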
arXiv Detail & Related papers (2022-01-17T17:36:29Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
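The shared-latent idea behind such a joint actor policy can be shown at shape level: one latent is sampled per scene and conditions every agent's action, which is what couples the agents' behaviors. The linear form below is a hypothetical stand-in, not TrafficSim's architecture.

```python
import numpy as np

def joint_actor_policy(obs, W_obs, W_z, b, rng):
    """obs: (n_agents, obs_dim). A single latent z is sampled once per scene
    and shared by all agents, so their actions are correlated through it.
    The shapes and the linear map are illustrative assumptions."""
    z = rng.standard_normal(W_z.shape[0])      # shared scene latent
    return np.tanh(obs @ W_obs + z @ W_z + b)  # (n_agents, act_dim) actions

# Hypothetical usage: 3 agents, 8-dim observations, 4-dim latent, 2-dim actions.
rng = np.random.default_rng(0)
obs = rng.standard_normal((3, 8))
W_obs = rng.standard_normal((8, 2))
W_z = rng.standard_normal((4, 2))
actions = joint_actor_policy(obs, W_obs, W_z, np.zeros(2), rng)
```

Resampling z yields a different but internally consistent joint behavior for the whole scene, which is the property that makes generated traffic diverse yet coherent.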
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- CARLA Real Traffic Scenarios -- novel training ground and benchmark for autonomous driving [8.287331387095545]
This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic.
We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems.
arXiv Detail & Related papers (2020-12-16T13:20:39Z)
- Sim-to-real reinforcement learning applied to end-to-end vehicle control [0.0]
We study end-to-end reinforcement learning on vehicle control problems, such as lane following and collision avoidance.
Our controller policy is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, even though its training was carried out solely in simulation.
arXiv Detail & Related papers (2020-12-14T12:30:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.