Godot Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2112.03636v1
- Date: Tue, 7 Dec 2021 11:24:34 GMT
- Title: Godot Reinforcement Learning Agents
- Authors: Edward Beeching, Jilles Debangoye, Olivier Simonin, Christian Wolf
- Abstract summary: The Godot RL Agents interface allows the design, creation and learning of agent behaviors in challenging 2D and 3D environments.
The framework is a versatile tool that gives researchers and game designers the ability to create environments with discrete, continuous and mixed action spaces.
- Score: 10.413185820687021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Godot Reinforcement Learning (RL) Agents, an open-source interface
for developing environments and agents in the Godot Game Engine. The Godot RL
Agents interface allows the design, creation and learning of agent behaviors in
challenging 2D and 3D environments with various on-policy and off-policy Deep
RL algorithms. We provide a standard Gym interface, with wrappers for learning
in the Ray RLlib and Stable Baselines RL frameworks. This gives users access
to over 20 state-of-the-art on-policy, off-policy and multi-agent RL
algorithms. The framework is a versatile tool that gives researchers and game
designers the ability to create environments with discrete, continuous and
mixed action spaces. The interface is relatively performant, reaching 12k
interactions per second on a high-end laptop when parallelized on 4 CPU
cores. An overview video is available here: https://youtu.be/g1MlZSFqIj4
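To make the "standard Gym interface" concrete, the sketch below shows the reset/step contract such an environment exposes and how a training loop drives it. This is a minimal illustration, not the actual Godot RL Agents API: the environment class, its dynamics, and the policy are all hypothetical stand-ins.

```python
# Hedged sketch of a Gym-style environment loop. All names here
# (ToyGodotStyleEnv, collect_episode) are illustrative, not from the library.


class ToyGodotStyleEnv:
    """Stand-in environment with a discrete action space of size 2.

    The agent starts at position 0; action 1 moves right, action 0 moves
    left. Reaching position +5 ends the episode with reward 1.0.
    """

    def __init__(self):
        self.action_space = [0, 1]
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position  # initial observation

    def step(self, action):
        self.position += 1 if action == 1 else -1
        done = self.position >= 5 or self.position <= -5
        reward = 1.0 if self.position >= 5 else 0.0
        return self.position, reward, done, {}  # obs, reward, done, info


def collect_episode(env, policy):
    """Roll out one episode; return total reward and interaction count."""
    obs = env.reset()
    total, steps, done = 0.0, 0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        steps += 1
    return total, steps


if __name__ == "__main__":
    env = ToyGodotStyleEnv()
    always_right = lambda obs: 1  # trivial policy for illustration
    total, steps = collect_episode(env, always_right)
    print(total, steps)
```

Because the environment satisfies the reset/step contract, the same loop works unchanged whether the backing simulation is this toy class or a real game engine process; that interchangeability is what lets wrappers plug one environment into many RL frameworks.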
Related papers
- Kinetix: Investigating the Training of General Agents through Open-Ended Physics-Based Control Tasks [3.479490713357225]
We procedurally generate tens of millions of 2D physics-based tasks and use these to train a general reinforcement learning (RL) agent for physical control.
Kinetix is an open-ended space of physics-based RL environments that can represent tasks ranging from robotic locomotion and grasping to video games and classic RL environments.
Our trained agent exhibits strong physical reasoning capabilities, being able to zero-shot solve unseen human-designed environments.
arXiv Detail & Related papers (2024-10-30T16:59:41Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Centralized control for multi-agent RL in a complex Real-Time-Strategy game [0.0]
Multi-Agent Reinforcement Learning (MARL) studies the behaviour of multiple learning agents that coexist in a shared environment.
MARL is more challenging than single-agent RL because it involves more complex learning dynamics.
This project provides the end-to-end experience of applying RL in the Lux AI v2 Kaggle competition.
arXiv Detail & Related papers (2023-04-25T17:19:05Z)
- JORLDY: a fully customizable open source framework for reinforcement learning [3.1864456096282696]
Reinforcement Learning (RL) has been actively researched in both academic and industrial fields.
JORLDY provides more than 20 widely used RL algorithms, implemented in PyTorch.
JORLDY supports multiple RL environments which include OpenAI gym, Unity ML-Agents, Mujoco, Super Mario Bros and Procgen.
arXiv Detail & Related papers (2022-04-11T06:28:27Z)
- ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning [141.58588761593955]
We present a library ElegantRL-podracer for cloud-native deep reinforcement learning.
It efficiently supports millions of cores to carry out massively parallel training at multiple levels.
At the low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 GPU cores on a single GPU.
arXiv Detail & Related papers (2021-12-11T06:31:21Z)
- Architecting and Visualizing Deep Reinforcement Learning Models [77.34726150561087]
Deep Reinforcement Learning (DRL) combines deep neural networks with reinforcement learning to train agents through trial-and-error interaction with an environment.
In this paper, we present a new Atari Pong game environment, a policy gradient based DRL model, a real-time network visualization, and an interactive display to help build intuition and awareness of the mechanics of DRL inference.
arXiv Detail & Related papers (2021-12-02T17:48:26Z)
- MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning [25.191567110519866]
We develop a new driving simulation platform called MetaDrive for the study of reinforcement learning algorithms.
Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings.
arXiv Detail & Related papers (2021-09-26T18:34:55Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- Forgetful Experience Replay in Hierarchical Reinforcement Learning from Demonstrations [55.41644538483948]
In this paper, we propose a combination of approaches that allow the agent to use low-quality demonstrations in complex vision-based environments.
Our proposed goal-oriented structuring of replay buffer allows the agent to automatically highlight sub-goals for solving complex hierarchical tasks in demonstrations.
The solution based on our algorithm outperforms all other entries in the well-known MineRL competition and enables the agent to mine a diamond in the Minecraft environment.
arXiv Detail & Related papers (2020-06-17T15:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.