LuckyMera: a Modular AI Framework for Building Hybrid NetHack Agents
- URL: http://arxiv.org/abs/2307.08532v1
- Date: Mon, 17 Jul 2023 14:46:59 GMT
- Title: LuckyMera: a Modular AI Framework for Building Hybrid NetHack Agents
- Authors: Luigi Quarantiello, Simone Marzeddu, Antonio Guzzi, Vincenzo Lomonaco
- Abstract summary: Roguelike video games offer a good trade-off in terms of complexity of the environment and computational costs.
We present LuckyMera, a flexible, modular, extensible and configurable AI framework built around NetHack.
LuckyMera comes with a set of off-the-shelf symbolic and neural modules (called "skills"): these modules can be either hard-coded behaviors, or neural Reinforcement Learning approaches.
- Score: 7.23273667916516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last few decades we have witnessed a significant development in
Artificial Intelligence (AI) thanks to the availability of a variety of
testbeds, mostly based on simulated environments and video games. Among those,
roguelike games offer a very good trade-off in terms of complexity of the
environment and computational costs, which makes them perfectly suited to test
AI agents generalization capabilities. In this work, we present LuckyMera, a
flexible, modular, extensible and configurable AI framework built around
NetHack, a popular terminal-based, single-player roguelike video game. This
library is aimed at simplifying and speeding up the development of AI agents
capable of successfully playing the game and offering a high-level interface
for designing game strategies. LuckyMera comes with a set of off-the-shelf
symbolic and neural modules (called "skills"): these modules can be either
hard-coded behaviors, or neural Reinforcement Learning approaches, with the
possibility of creating compositional hybrid solutions. Additionally, LuckyMera
comes with a set of utility features to save its experiences in the form of
trajectories for further analysis and to use them as datasets to train neural
modules, with a direct interface to the NetHack Learning Environment and
MiniHack. Through an empirical evaluation we validate our skills implementation
and propose a strong baseline agent that can reach state-of-the-art
performances in the complete NetHack game. LuckyMera is open-source and
available at https://github.com/Pervasive-AI-Lab/LuckyMera.
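The "skills" design described in the abstract can be sketched as a priority-ordered pipeline: each symbolic or neural module either proposes an action or defers to the next, and every decision is logged as a trajectory for later analysis or training. The class and method names below are illustrative stand-ins, not LuckyMera's actual API.

```python
# Hypothetical sketch of the "skills" idea: an agent consults modules in
# priority order; each either proposes an action or defers to the next.
# All names here are illustrative, not the library's real interface.

from dataclasses import dataclass, field
from typing import Callable, Optional

Observation = dict  # e.g. an NLE-style observation: glyphs, stats, message


@dataclass
class Skill:
    name: str
    propose: Callable[[Observation], Optional[int]]  # action id, or None to defer


@dataclass
class HybridAgent:
    skills: list
    trajectory: list = field(default_factory=list)  # (skill, action) pairs for replay

    def act(self, obs: Observation) -> int:
        for skill in self.skills:
            action = skill.propose(obs)
            if action is not None:
                self.trajectory.append((skill.name, action))
                return action
        self.trajectory.append(("fallback", 0))
        return 0  # no skill fired: fall back to a no-op action


# A hard-coded "eat" skill fires only when hunger is low; "explore" always fires.
eat = Skill("eat", lambda o: 21 if o.get("hunger", 1.0) < 0.2 else None)
explore = Skill("explore", lambda o: 3)

agent = HybridAgent(skills=[eat, explore])
print(agent.act({"hunger": 0.1}))  # 21: the higher-priority eat skill fires
print(agent.act({"hunger": 0.9}))  # 3: eat defers, explore handles the step
```

A neural skill would fit the same interface: its `propose` would run a trained policy and return an action id, which is how symbolic and learned modules can be composed into one hybrid agent.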
Related papers
- Autoverse: An Evolvable Game Language for Learning Robust Embodied Agents [2.624282086797512]
We introduce Autoverse, an evolvable, domain-specific language for single-player 2D grid-based games.
We demonstrate its use as a scalable training ground for Open-Ended Learning (OEL) algorithms.
arXiv Detail & Related papers (2024-07-05T02:18:02Z)
- NetHack is Hard to Hack [37.24009814390211]
In the NeurIPS 2021 NetHack Challenge, symbolic agents outperformed neural approaches by over four times in median game score.
We present an extensive study on neural policy learning for NetHack.
We produce a state-of-the-art neural agent that surpasses previous fully neural policies by 127% in offline settings and 25% in online settings on median game score.
arXiv Detail & Related papers (2023-05-30T17:30:17Z)
- MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge [70.47759528596711]
We introduce MineDojo, a new framework built on the popular Minecraft game.
We propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function.
Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward.
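The learned-reward idea summarized above can be sketched as: embed the free-form task description and a summary of the agent's behavior in a shared space, and use their similarity as the reward signal. The embeddings below are deterministic hash-based stubs standing in for a real pre-trained video-language model; every name here is illustrative, not MineDojo's actual API.

```python
# Toy sketch of a language-conditioned reward: cosine similarity between a
# goal embedding and a behavior embedding. Hash-based stubs replace the real
# pre-trained video-language model.

import hashlib
import math


def _stub_embed(text: str, dim: int = 16) -> list:
    """Deterministic pseudo-embedding: hash bytes, centered and unit-normalized."""
    digest = hashlib.sha256(text.encode()).digest()
    v = [digest[i % len(digest)] - 127.5 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]


def language_reward(goal: str, behavior_caption: str) -> float:
    """Cosine similarity between goal and behavior embeddings, in [-1, 1]."""
    g, b = _stub_embed(goal), _stub_embed(behavior_caption)
    return sum(gi * bi for gi, bi in zip(g, b))


print(language_reward("chop a tree", "chop a tree"))  # 1.0: identical captions
```

With a real model the behavior embedding would come from recent video frames rather than a caption, but the reward shape, similarity in a shared embedding space, is the same.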
arXiv Detail & Related papers (2022-06-17T15:53:05Z)
- Godot Reinforcement Learning Agents [10.413185820687021]
The Godot RL Agents interface allows the design, creation and learning of agent behaviors in challenging 2D and 3D environments.
The framework is a versatile tool that gives researchers and game designers the ability to create environments with discrete, continuous and mixed action spaces.
arXiv Detail & Related papers (2021-12-07T11:24:34Z)
- Out of the Box: Embodied Navigation in the Real World [45.97756658635314]
We show how to transfer knowledge acquired in simulation into the real world.
We deploy our models on a LoCoBot equipped with a single Intel RealSense camera.
Our experiments indicate that it is possible to achieve satisfactory results when deploying the obtained model in the real world.
arXiv Detail & Related papers (2021-05-12T18:00:14Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully-playable roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
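The Random Network Distillation bonus mentioned above rewards novelty: the intrinsic reward is the prediction error between a trainable predictor and a fixed, randomly initialized target network, so often-visited states earn less bonus over time. A minimal sketch, using single linear maps and plain SGD in place of the original CNNs:

```python
# Toy Random Network Distillation: intrinsic reward = prediction error
# between a trained predictor and a frozen random target network.

import random

random.seed(0)
OBS_DIM, FEAT_DIM, LR = 4, 3, 0.1

W_target = [[random.gauss(0, 1) for _ in range(OBS_DIM)] for _ in range(FEAT_DIM)]
W_pred = [[random.gauss(0, 1) for _ in range(OBS_DIM)] for _ in range(FEAT_DIM)]


def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]


def intrinsic_reward(obs, update=True):
    """Return 0.5 * ||pred - target||^2 for obs; optionally take one SGD step."""
    err = [p - t for p, t in zip(matvec(W_pred, obs), matvec(W_target, obs))]
    if update:  # gradient of the squared error w.r.t. W_pred is err * obs^T
        for i in range(FEAT_DIM):
            for j in range(OBS_DIM):
                W_pred[i][j] -= LR * err[i] * obs[j]
    return 0.5 * sum(e * e for e in err)


# A repeatedly visited observation becomes "familiar": its bonus decays.
s = [0.5, -0.5, 0.5, -0.5]  # unit-norm toy observation
rewards = [intrinsic_reward(s) for _ in range(50)]
print(rewards[0] > rewards[-1])  # True: the novelty bonus shrinks with visits
```

In a full agent this bonus is added to the environment reward, pushing the policy toward states the predictor has not yet learned to imitate.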
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.