The NetHack Learning Environment
- URL: http://arxiv.org/abs/2006.13760v2
- Date: Tue, 1 Dec 2020 11:05:57 GMT
- Title: The NetHack Learning Environment
- Authors: Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel
- Abstract summary: We present the NetHack Learning Environment (NLE), a procedurally generated roguelike environment for Reinforcement Learning (RL) research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
- Score: 79.06395964379107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the
development of challenging environments that test the limits of current
methods. While existing RL environments are either sufficiently complex or
based on fast simulation, they are rarely both. Here, we present the NetHack
Learning Environment (NLE), a scalable, procedurally generated, stochastic,
rich, and challenging environment for RL research based on the popular
single-player terminal-based roguelike game, NetHack. We argue that NetHack is
sufficiently complex to drive long-term research on problems such as
exploration, planning, skill acquisition, and language-conditioned RL, while
dramatically reducing the computational resources required to gather a large
amount of experience. We compare NLE and its task suite to existing
alternatives, and discuss why it is an ideal medium for testing the robustness
and systematic generalization of RL agents. We demonstrate empirical success
for early stages of the game using a distributed Deep RL baseline and Random
Network Distillation exploration, alongside qualitative analysis of various
agents trained in the environment. NLE is open source at
https://github.com/facebookresearch/nle.
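For reference, the linked repository exposes NetHack through a Gym-style interface; a minimal random-agent loop might look like the sketch below. This is a sketch only: the task name "NetHackScore-v0" and the dict-of-arrays observation format follow the repository's examples, and the random policy is a placeholder for a trained agent.
```python
# Minimal random-agent loop against NLE's Gym-style interface.
# Assumes `pip install nle`; the task name "NetHackScore-v0" follows the
# examples in https://github.com/facebookresearch/nle.
import gym
import nle  # importing nle registers the NetHack* tasks with Gym

env = gym.make("NetHackScore-v0")
obs = env.reset()  # dict of observation arrays (glyphs, blstats, message, ...)

done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()  # placeholder for a trained policy
    obs, reward, done, info = env.step(action)
    episode_return += reward

print("episode return:", episode_return)
env.close()
```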
Related papers
- Reinforcing Competitive Multi-Agents for Playing So Long Sucker [0.393259574660092]
This paper examines the use of classical deep reinforcement learning (DRL) algorithms (DQN, DDQN, and Dueling DQN) in the strategy game So Long Sucker.
The study's primary goal is to teach autonomous agents the game's rules and strategies using classical DRL methods.
arXiv Detail & Related papers (2024-11-17T12:38:13Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach for reducing the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose SPRING, a novel approach that reads the game's original academic paper and uses the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- A Survey of Meta-Reinforcement Learning [69.76165430793571]
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
arXiv Detail & Related papers (2023-01-19T12:01:41Z)
- A Survey on Explainable Reinforcement Learning: Concepts, Algorithms, Challenges [38.70863329476517]
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal.
Despite the encouraging results achieved, the deep neural network backbone is widely regarded as a black box that prevents practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential.
To alleviate this issue, a large body of literature has been devoted to shedding light on the inner workings of intelligent agents, either through intrinsic interpretability or post-hoc explainability.
arXiv Detail & Related papers (2022-11-12T13:52:06Z)
- MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research [24.9044606044585]
MiniHack is a powerful sandbox framework for easily designing novel deep reinforcement learning environments.
By leveraging the full set of entities and environment dynamics from NetHack, MiniHack allows designing custom RL testbeds.
In addition to a variety of RL tasks and baselines, MiniHack can wrap existing RL benchmarks and provide ways to seamlessly add additional complexity.
arXiv Detail & Related papers (2021-09-27T17:22:42Z)
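MiniHack's registered tasks are exposed through the same Gym-style interface as NLE; the sketch below is illustrative only, assuming the minihack package is installed and that "MiniHack-River-v0" is among its registered tasks, as in the project's examples.
```python
# Sketch: instantiating a registered MiniHack task via its Gym-style interface.
# Assumes `pip install minihack`; the task name "MiniHack-River-v0" is taken
# from the MiniHack repository's examples.
import gym
import minihack  # noqa: F401 -- importing registers the MiniHack-* tasks

env = gym.make("MiniHack-River-v0")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```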
- Continuous Coordination As a Realistic Scenario for Lifelong Learning [6.044372319762058]
We introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings.
We evaluate several recent MARL methods and benchmark state-of-the-art LLL algorithms under limited memory and computation.
We empirically show that the agents trained in our setup are able to coordinate well with unseen agents, without any additional assumptions made by previous works.
arXiv Detail & Related papers (2021-03-04T18:44:03Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully playable roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in video games.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning [108.9599280270704]
We propose a benchmark called RL Unplugged to evaluate and compare offline RL methods.
RL Unplugged includes data from a diverse range of domains including games and simulated motor control problems.
We will release data for all our tasks and open-source all algorithms presented in this paper.
arXiv Detail & Related papers (2020-06-24T17:14:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.