MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning
Research
- URL: http://arxiv.org/abs/2109.13202v1
- Date: Mon, 27 Sep 2021 17:22:42 GMT
- Title: MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning
Research
- Authors: Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder,
Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward
Grefenstette, Tim Rocktäschel
- Abstract summary: MiniHack is a powerful sandbox framework for easily designing novel deep reinforcement learning environments.
By leveraging the full set of entities and environment dynamics from NetHack, MiniHack allows designing custom RL testbeds.
In addition to a variety of RL tasks and baselines, MiniHack can wrap existing RL benchmarks and provide ways to seamlessly add additional complexity.
- Score: 24.9044606044585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The progress in deep reinforcement learning (RL) is heavily driven by the
availability of challenging benchmarks used for training agents. However,
benchmarks that are widely adopted by the community are not explicitly designed
for evaluating specific capabilities of RL methods. While there exist
environments for assessing particular open problems in RL (such as exploration,
transfer learning, unsupervised environment design, or even language-assisted
RL), it is generally difficult to extend these to richer, more complex
environments once research goes beyond proof-of-concept results. We present
MiniHack, a powerful sandbox framework for easily designing novel RL
environments. MiniHack is a one-stop shop for RL experiments with environments
ranging from small rooms to complex, procedurally generated worlds. By
leveraging the full set of entities and environment dynamics from NetHack, one
of the richest grid-based video games, MiniHack allows designing custom RL
testbeds that are fast and convenient to use. With this sandbox framework,
novel environments can be designed easily, either using a human-readable
description language or a simple Python interface. In addition to a variety of
RL tasks and baselines, MiniHack can wrap existing RL benchmarks and provide
ways to seamlessly add additional complexity.
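The abstract notes that environments can be defined either through a human-readable description language or a simple Python interface. As a rough illustration of that idea only (this is not MiniHack's actual API; the map syntax and all names below are hypothetical), a level can be written as an ASCII map and turned into a minimal Gym-style environment:

```python
# Illustrative sketch, not MiniHack's real API: a tiny grid environment
# built from a human-readable map string, in the spirit of des-file-style
# level descriptions. '#' = wall, '.' = floor, '@' = start, '>' = goal.

MAP = """\
#####
#.>.#
#.@.#
#####
"""


class GridEnv:
    """Minimal Gym-style environment: reach '>' from '@' on the map."""

    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # N, S, W, E

    def __init__(self, map_str):
        self.grid = [list(row) for row in map_str.splitlines()]
        for r, row in enumerate(self.grid):
            for c, ch in enumerate(row):
                if ch == "@":
                    self.start = (r, c)
                elif ch == ">":
                    self.goal = (r, c)
        self.pos = self.start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        if self.grid[r][c] != "#":  # walls block movement
            self.pos = (r, c)
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}
```

The real framework operates over NetHack's far richer entity set and dynamics; this sketch only shows how a textual level description can map onto the standard reset/step interface.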
Related papers
- Gymnasium: A Standard Interface for Reinforcement Learning Environments [5.7144222327514616]
Reinforcement Learning (RL) is a growing field that has the potential to revolutionize many areas of artificial intelligence.
Despite its promise, RL research is often hindered by the lack of standardization in environment and algorithm implementations.
Gymnasium is an open-source library that provides a standard API for RL environments.
arXiv Detail & Related papers (2024-07-24T06:35:05Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Craftium: An Extensible Framework for Creating Reinforcement Learning Environments [0.5461938536945723]
This paper presents Craftium, a novel framework for exploring and creating rich 3D visual RL environments.
Craftium builds upon the Minetest game engine and the popular Gymnasium API.
arXiv Detail & Related papers (2024-07-04T14:38:02Z)
- RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control [7.259696592534715]
Deep Reinforcement Learning (RL) can yield capable agents and control policies in several domains but is commonly plagued by prohibitively long training times.
We present RLtools, a dependency-free, header-only, pure C++ library for deep supervised and reinforcement learning.
arXiv Detail & Related papers (2023-06-06T09:26:43Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures which can be highly competitive against manually designed policies, but also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- EasyRL: A Simple and Extensible Reinforcement Learning Framework [3.2173369911280023]
EasyRL provides an interactive graphical user interface for users to train and evaluate RL agents.
EasyRL does not require programming knowledge for training and testing simple built-in RL agents.
EasyRL also supports custom RL agents and environments, which can be highly beneficial for RL researchers in evaluating and comparing their RL models.
arXiv Detail & Related papers (2020-08-04T17:02:56Z)
- WordCraft: An Environment for Benchmarking Commonsense Agents [107.20421897619002]
We propose WordCraft, an RL environment based on Little Alchemy 2.
This lightweight environment is fast to run and built upon entities and relations inspired by real-world semantics.
arXiv Detail & Related papers (2020-07-17T18:40:46Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- MushroomRL: Simplifying Reinforcement Learning Research [60.70556446270147]
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
Compared to other available libraries, MushroomRL has been created with the purpose of providing a comprehensive and flexible framework to minimize the effort in implementing and testing novel RL methodologies.
arXiv Detail & Related papers (2020-01-04T17:23:34Z)
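The frameworks listed above largely converge on the same Gym-style reset/step contract, which is what makes it possible for a sandbox like MiniHack to "wrap existing RL benchmarks and seamlessly add additional complexity." As a generic, hedged sketch of that wrapping idea (hypothetical names, not any library's real API), a wrapper can inject extra difficulty, here "sticky" actions that occasionally repeat the previous action, around any environment exposing reset/step:

```python
# Illustrative sketch (hypothetical names): a wrapper that layers extra
# complexity onto any Gym-style environment, here "sticky actions" that
# repeat the previous action with some probability.
import random


class StickyActionWrapper:
    def __init__(self, env, stickiness=0.25, seed=None):
        self.env = env
        self.stickiness = stickiness
        self.rng = random.Random(seed)
        self.prev_action = None

    def reset(self):
        self.prev_action = None
        return self.env.reset()

    def step(self, action):
        # With probability `stickiness`, ignore the chosen action and
        # repeat the previous one instead (first step is never sticky).
        if self.prev_action is not None and self.rng.random() < self.stickiness:
            action = self.prev_action
        self.prev_action = action
        return self.env.step(action)
```

Because the wrapper only relies on the reset/step interface, it composes with any environment or other wrappers, the same decorator pattern that Gym-style libraries use for observation, reward, and dynamics modifications.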
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.