On the Verge of Solving Rocket League using Deep Reinforcement Learning
and Sim-to-sim Transfer
- URL: http://arxiv.org/abs/2205.05061v1
- Date: Tue, 10 May 2022 17:37:19 GMT
- Title: On the Verge of Solving Rocket League using Deep Reinforcement Learning
and Sim-to-sim Transfer
- Authors: Marco Pleines, Konstantin Ramthun, Yannik Wegener, Hendrik Meyer,
Matthias Pallasch, Sebastian Prior, Jannik Drögemüller, Leon
Büttinghaus, Thilo Röthemeyer, Alexander Kaschwig, Oliver Chmurzynski,
Frederik Rohkrähmer, Roman Kalkreuth, Frank Zimmer, Mike Preuss
- Abstract summary: This work explores a third way that is established in robotics, namely sim-to-real transfer.
In the case of Rocket League, we demonstrate that single behaviors of goalies and strikers can be successfully learned using Deep Reinforcement Learning.
- Score: 42.87143421242222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomously trained agents that are expected to play video games
reasonably well rely either on fast simulation speeds or on heavy
parallelization across thousands of concurrently running machines. This work
explores a third way that is established in robotics: sim-to-real transfer,
or, if the game is itself considered a simulation, sim-to-sim transfer. In the
case of Rocket League, we demonstrate that individual goalie and striker
behaviors can be learned successfully with Deep Reinforcement Learning in the
simulation environment and transferred back to the original game. Although the
implemented training simulation is somewhat inaccurate, the goalkeeping agent
saves nearly 100% of the shots it faces once transferred, while the striking
agent scores in about 75% of cases. The trained agents are therefore robust
enough to generalize to the target domain of Rocket League.
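The sim-to-sim idea in the abstract, training in a cheap and slightly inaccurate simulation and then evaluating the frozen policy in a perturbed target domain, can be illustrated with a toy sketch. Nothing below comes from the paper's actual setup (which uses Deep RL on a Rocket League physics reimplementation); the grid "goalie", the `drift` parameter, and tabular Q-learning are all illustrative stand-ins chosen to keep the example self-contained.

```python
import random

ACTIONS = (-1, 0, +1)      # goalie moves: left, stay, right (grid cells)
GRID, STEPS = 11, 12       # width of the goal line; moves before the shot lands

def run_episode(policy, drift, rng, learn=None):
    """One shot on goal: the ball will land on a random cell; the goalie
    starts in the middle and has STEPS moves to reach it. `drift` randomly
    nudges the landing cell mid-episode, standing in for the dynamics gap
    between the training simulation and the target domain."""
    ball, pos = rng.randrange(GRID), GRID // 2
    for _ in range(STEPS):
        if rng.random() < drift:                 # target-domain inaccuracy
            ball = min(GRID - 1, max(0, ball + rng.choice((-1, 1))))
        state = ball - pos                       # relative offset is enough
        a = policy(state, rng)
        pos = min(GRID - 1, max(0, pos + ACTIONS[a]))
        if learn:
            learn(state, a, ball - pos)
    return pos == ball                           # saved?

# Tabular Q-learning in the cheap "source" simulation (drift = 0).
Q = {}

def greedy(state, rng, eps=0.0):
    if rng.random() < eps:
        return rng.randrange(3)
    qs = [Q.get((state, a), 0.0) for a in range(3)]
    return qs.index(max(qs))

def update(state, a, next_offset):
    r = -abs(next_offset)                        # shaped reward: close the gap
    best_next = max(Q.get((next_offset, b), 0.0) for b in range(3))
    key = (state, a)
    Q[key] = Q.get(key, 0.0) + 0.2 * (r + 0.9 * best_next - Q.get(key, 0.0))

rng = random.Random(0)
for _ in range(5000):  # train where simulation is fast and exact
    run_episode(lambda s, r: greedy(s, r, eps=0.2), drift=0.0, rng=rng, learn=update)

# "Transfer": evaluate the frozen greedy policy under dynamics it never saw.
saves = sum(run_episode(greedy, drift=0.05, rng=rng) for _ in range(1000))
print(f"save rate in the perturbed target domain: {saves / 1000:.0%}")
```

The sketch mirrors the paper's finding in miniature: a policy learned in an imperfect source simulation can stay robust under a modest domain gap, although the actual work relies on deep function approximation and a far richer physics mismatch than a one-cell drift.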
Related papers
- DrEureka: Language Model Guided Sim-To-Real Transfer [64.14314476811806]
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale.
In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design.
Our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball.
arXiv Detail & Related papers (2024-06-04T04:53:05Z)
- Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning [26.13655448415553]
Deep Reinforcement Learning (Deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot.
We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game.
The resulting agent exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking and more.
arXiv Detail & Related papers (2023-04-26T16:25:54Z)
- Sim-and-Real Reinforcement Learning for Manipulation: A Consensus-based Approach [4.684126055213616]
We propose a Consensus-based Sim-And-Real deep reinforcement learning algorithm (CSAR) for manipulator pick-and-place tasks.
We train the agents in both the simulator and the real world to obtain optimal policies for both domains.
arXiv Detail & Related papers (2023-02-26T22:27:23Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation, and directly deploy in the real-world.
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Combining Off and On-Policy Training in Model-Based Reinforcement Learning [77.34726150561087]
We propose a way to obtain off-policy targets using data from simulated games in MuZero.
Our results show that these targets speed up the training process and lead to faster convergence and higher rewards.
arXiv Detail & Related papers (2021-02-24T10:47:26Z)
- Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation, and from 10% to 80% in the real world, compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z)
- Sim-to-Real Transfer for Vision-and-Language Navigation [70.86250473583354]
We study the problem of releasing a robot in a previously unseen environment, and having it follow unconstrained natural language navigation instructions.
Recent work on the task of Vision-and-Language Navigation (VLN) has achieved significant progress in simulation.
To assess the implications of this work for robotics, we transfer a VLN agent trained in simulation to a physical robot.
arXiv Detail & Related papers (2020-11-07T16:49:04Z)
- Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World [0.0]
We present a DRL-based algorithm that is capable of performing autonomous robot control using Deep Q-Networks (DQN).
In our approach, the agent is trained in a simulated environment and is able to navigate in both simulated and real-world environments.
The trained agent is able to run on limited hardware resources and its performance is comparable to state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-23T15:23:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.