Meta Arcade: A Configurable Environment Suite for Meta-Learning
- URL: http://arxiv.org/abs/2112.00583v1
- Date: Wed, 1 Dec 2021 15:54:38 GMT
- Title: Meta Arcade: A Configurable Environment Suite for Meta-Learning
- Authors: Edward W. Staley, Chace Ashcraft, Benjamin Stoler, Jared Markowitz,
Gautam Vallabha, Christopher Ratto, Kapil D. Katyal
- Abstract summary: We present Meta Arcade, a tool to easily define and configure custom 2D arcade games that share common visuals, state spaces, action spaces, game components, and scoring mechanisms.
We include a suite of 24 predefined games that collectively illustrate the possibilities of this framework and discuss how these games can be configured for research applications.
- Score: 4.962850553353402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most approaches to deep reinforcement learning (DRL) attempt to solve a
single task at a time. As a result, most existing research benchmarks consist
of individual games or suites of games that have common interfaces but little
overlap in their perceptual features, objectives, or reward structures. To
facilitate research into knowledge transfer among trained agents (e.g. via
multi-task and meta-learning), more environment suites that provide
configurable tasks with enough commonality to be studied collectively are
needed. In this paper we present Meta Arcade, a tool to easily define and
configure custom 2D arcade games that share common visuals, state spaces,
action spaces, game components, and scoring mechanisms. Meta Arcade differs
from prior environments in that both task commonality and configurability are
prioritized: entire sets of games can be constructed from common elements, and
these elements are adjustable through exposed parameters. We include a suite of
24 predefined games that collectively illustrate the possibilities of this
framework and discuss how these games can be configured for research
applications. We provide several experiments that illustrate how Meta Arcade
could be used, including single-task benchmarks of predefined games, sample
curriculum-based approaches that change game parameters over a set schedule,
and an exploration of transfer learning between games.
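The abstract's idea of games "constructed from common elements... adjustable through exposed parameters", with curricula that change parameters over a schedule, can be sketched as follows. This is an illustrative sketch only; the class and parameter names (`ArcadeConfig`, `ball_speed`, `num_targets`, etc.) are assumptions for illustration, not the actual Meta Arcade API.

```python
# Hypothetical sketch of a configurable arcade suite in the spirit of
# Meta Arcade: games share a pool of adjustable parameters, and a
# curriculum can interpolate between an "easy" and a "hard" setting.
# All names here are illustrative assumptions, not Meta Arcade's API.

class ArcadeConfig:
    """A game definition built from shared, adjustable elements."""

    def __init__(self, base_game, **params):
        self.base_game = base_game
        # Defaults shared across games; overridden per game via **params.
        self.params = {
            "paddle_speed": 1.0,
            "ball_speed": 1.0,
            "num_targets": 5.0,
            "reward_on_hit": 1.0,
        }
        self.params.update(params)

    def interpolate(self, other, alpha):
        """Blend two configs of the same game; useful for curricula
        that change game parameters over a set schedule."""
        blended = {
            k: (1 - alpha) * self.params[k] + alpha * other.params[k]
            for k in self.params
        }
        return ArcadeConfig(self.base_game, **blended)

# Curriculum: ramp difficulty linearly over 11 stages.
easy = ArcadeConfig("breakout_like", ball_speed=0.5, num_targets=3.0)
hard = ArcadeConfig("breakout_like", ball_speed=2.0, num_targets=10.0)
schedule = [easy.interpolate(hard, t / 10) for t in range(11)]
```

Under this sketch, a transfer-learning experiment would train on one `ArcadeConfig` and evaluate on another that shares the same base game but different parameter values.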
Related papers
- Accelerate Multi-Agent Reinforcement Learning in Zero-Sum Games with
Subgame Curriculum Learning [65.36326734799587]
We present a novel subgame curriculum learning framework for zero-sum games.
It adopts an adaptive initial state distribution by resetting agents to some previously visited states.
We derive a subgame selection metric that approximates the squared distance to NE values.
arXiv Detail & Related papers (2023-10-07T13:09:37Z)
- Towards General Game Representations: Decomposing Games Pixels into Content and Style [2.570570340104555]
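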
Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
arXiv Detail & Related papers (2023-07-20T17:53:04Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose SPRING, a novel approach that reads the game's original academic paper and uses the learned knowledge to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- WILD-SCAV: Benchmarking FPS Gaming AI on Unity3D-based Environments [5.020816812380825]
Recent advances in deep reinforcement learning (RL) have demonstrated complex decision-making capabilities in simulation environments.
However, these agents hardly transfer to more complicated problems, due to the lack of complexity and variation in the environments on which they are trained and tested.
We developed WILD-SCAV, a powerful open-world environment based on a 3D FPS game, to bridge this gap.
It provides realistic 3D environments of variable complexity, various tasks, and multiple modes of interaction, where agents can learn to perceive 3D environments, navigate, plan, and compete and cooperate in a human-like manner.
arXiv Detail & Related papers (2022-10-14T13:39:41Z)
- Spatial State-Action Features for General Games [5.849736173068868]
We formulate a design and efficient implementation of spatial state-action features for general games.
These are patterns that can be trained to incentivise or disincentivise actions based on whether or not they match variables of the state in a local area.
We propose an efficient approach for evaluating active features for any given set of features.
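The summary above describes local patterns that match "variables of the state in a local area" around an action. A minimal sketch of that idea on a grid game follows; the function names, cell encoding, and data layout are assumptions for illustration, not the paper's actual design or implementation.

```python
# Illustrative sketch of spatial state-action features for grid games:
# a feature is a small pattern of required cell values at offsets
# relative to a candidate action's cell, and a feature is "active"
# when its pattern matches the board around that action.
# Names and encodings here are assumptions, not the paper's API.

def feature_active(board, action, pattern):
    """Check whether a local pattern matches around an action's cell.

    board:   dict mapping (row, col) -> cell value (e.g. 1 friend, 2 enemy)
    action:  (row, col) of the move under consideration
    pattern: dict mapping relative (drow, dcol) offsets -> required value
    """
    r, c = action
    return all(
        board.get((r + dr, c + dc), 0) == required  # 0 = empty / off-board
        for (dr, dc), required in pattern.items()
    )

def active_features(board, action, features):
    """Return indices of all features active at this action."""
    return [i for i, p in enumerate(features)
            if feature_active(board, action, p)]

# A feature that fires when the cell east of the move holds an enemy (2).
adjacent_enemy_east = {(0, 1): 2}
board = {(3, 4): 2}
```

In a learning setting, each feature index would carry a trainable weight that incentivises or disincentivises the matching actions.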
arXiv Detail & Related papers (2022-01-17T13:34:04Z)
- Combining Off and On-Policy Training in Model-Based Reinforcement Learning [77.34726150561087]
We propose a way to obtain off-policy targets using data from simulated games in MuZero.
Our results show that these targets speed up the training process and lead to faster convergence and higher rewards.
arXiv Detail & Related papers (2021-02-24T10:47:26Z)
- Griddly: A platform for AI research in games [0.0]
We present Griddly as a new platform for Game AI research.
Griddly provides a unique combination of highly customizable games, different observer types and an efficient C++ core engine.
We present a series of baseline experiments to study the effect of different observation configurations on the generalization ability of RL agents.
arXiv Detail & Related papers (2020-11-12T13:23:31Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning [71.90902837008278]
We propose to incorporate neural architecture search (NAS) into general-purpose multi-task learning (GP-MTL).
In order to adapt to different task combinations, we disentangle the GP-MTL networks into single-task backbones.
We also propose a novel single-shot gradient-based search algorithm that closes the performance gap between the searched architectures.
arXiv Detail & Related papers (2020-03-31T09:49:14Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.