Towards General Game Representations: Decomposing Games Pixels into
Content and Style
- URL: http://arxiv.org/abs/2307.11141v1
- Date: Thu, 20 Jul 2023 17:53:04 GMT
- Title: Towards General Game Representations: Decomposing Games Pixels into
Content and Style
- Authors: Chintan Trivedi, Konstantinos Makantasis, Antonios Liapis and Georgios
N. Yannakakis
- Abstract summary: Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
- Score: 2.570570340104555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-screen game footage contains rich contextual information that players
process when playing and experiencing a game. Learning pixel representations of
games can benefit artificial intelligence across several downstream tasks
including game-playing agents, procedural content generation, and player
modelling. The generalizability of these methods, however, remains a challenge,
as learned representations should ideally be shared across games with similar
game mechanics. This could allow, for instance, game-playing agents trained on
one game to perform well in similar games with no re-training. This paper
explores how generalizable pre-trained computer vision encoders can be for such
tasks, by decomposing the latent space into content embeddings and style
embeddings. The goal is to minimize the domain gap between games of the same
genre when it comes to game content critical for downstream tasks, and ignore
differences in graphical style. We employ a pre-trained Vision Transformer
encoder and a decomposition technique based on game genres to obtain separate
content and style embeddings. Our findings show that the decomposed embeddings
achieve style invariance across multiple games while still maintaining strong
content extraction capabilities. We argue that the proposed decomposition of
content and style offers better generalization capacities across game
environments independently of the downstream task.
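For readers who want a concrete picture of the proposed decomposition, the following is a minimal sketch of one way a content/style split can sit on top of a frozen, pre-trained Vision Transformer. The two linear projections, the genre-classification head, and all identifiers (ContentStyleEncoder, to_content, to_style) are illustrative assumptions for this sketch, not the authors' exact technique; only the overall idea of obtaining separate content and style embeddings from a pre-trained ViT follows the abstract above.

```python
# Illustrative sketch (not the authors' exact method): a frozen pre-trained
# ViT produces a latent vector, which two learned projections split into a
# "content" and a "style" embedding. A genre classifier is attached to the
# content half only, so content is encouraged to carry genre-critical
# information while style is left free to absorb appearance differences.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


class ContentStyleEncoder(nn.Module):
    def __init__(self, content_dim: int = 256, style_dim: int = 256, num_genres: int = 5):
        super().__init__()
        # Frozen, pre-trained ViT backbone; replacing the classification head
        # with Identity makes the forward pass return the 768-d class-token
        # embedding instead of ImageNet logits.
        self.backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
        self.backbone.heads = nn.Identity()
        for p in self.backbone.parameters():
            p.requires_grad = False

        # Learned projections that decompose the latent vector (hypothetical).
        self.to_content = nn.Linear(768, content_dim)
        self.to_style = nn.Linear(768, style_dim)

        # Genre classifier on the content embedding only, standing in for a
        # "game content" training signal (an assumption of this sketch).
        self.genre_head = nn.Linear(content_dim, num_genres)

    def forward(self, frames: torch.Tensor):
        z = self.backbone(frames)              # (B, 768) latent vector
        content = self.to_content(z)           # game-content embedding
        style = self.to_style(z)               # graphical-style embedding
        genre_logits = self.genre_head(content)
        return content, style, genre_logits


if __name__ == "__main__":
    # Dummy 224x224 frames; real game footage would first go through the
    # weights' preprocessing transforms (ViT_B_16_Weights.IMAGENET1K_V1.transforms()).
    model = ContentStyleEncoder()
    frames = torch.randn(2, 3, 224, 224)
    content, style, logits = model(frames)
    print(content.shape, style.shape, logits.shape)
```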
Related papers
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion
Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Game State Learning via Game Scene Augmentation [2.570570340104555]
We introduce a new game scene augmentation technique -- named GameCLR -- that takes advantage of the game-engine to define and synthesize specific, highly-controlled renderings of different game states.
Our results suggest that GameCLR can infer the game's state information from game footage more accurately than the baseline.
arXiv Detail & Related papers (2022-07-04T09:40:14Z)
- Contrastive Learning of Generalized Game Representations [2.323282558557423]
Representing games through their pixels offers a promising approach for building general-purpose and versatile game models.
While games are not merely images, neural network models trained on game pixels often capture differences in the visual style of the image rather than the content of the game.
In this paper we build on recent advances in contrastive learning and showcase its benefits for representation learning in games (a generic sketch of this kind of contrastive objective is given after this list).
arXiv Detail & Related papers (2021-06-18T11:17:54Z)
- MarioNette: Self-Supervised Sprite Learning [67.51317291061115]
We propose a deep learning approach for obtaining a graphically disentangled representation of recurring elements.
By jointly learning a dictionary of texture patches and training a network that places them onto a canvas, we effectively deconstruct sprite-based content into a sparse, consistent, and interpretable representation.
arXiv Detail & Related papers (2021-04-29T17:59:01Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Entity Embedding as Game Representation [0.9645196221785693]
We present an autoencoder for deriving what we call "entity embeddings".
In this paper we introduce the learned representation, along with some evidence towards its quality and future utility.
arXiv Detail & Related papers (2020-10-04T21:16:45Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- Benchmarking End-to-End Behavioural Cloning on Video Games [5.863352129133669]
We study the general applicability of behavioural cloning on twelve video games, including six modern video games (published after 2010).
Our results show that these agents cannot match humans in raw performance but do learn basic dynamics and rules.
We also demonstrate how the quality of the data matters, and how recording data from humans is subject to a state-action mismatch, due to human reflexes.
arXiv Detail & Related papers (2020-04-02T13:31:51Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)
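Several entries above (GameCLR, Contrastive Learning of Generalized Game Representations) rest on contrastive representation learning: embeddings of two views of the same game state are pulled together while all other pairs are pushed apart. The sketch below shows a generic NT-Xent (SimCLR-style) objective under the assumption of paired (B, D) embeddings; it is not the exact loss used in either paper.

```python
# Generic NT-Xent (normalized temperature-scaled cross-entropy) loss, the
# standard contrastive objective behind SimCLR-style methods. Illustrative
# formulation only, not the exact loss of the papers listed above.
import torch
import torch.nn.functional as F


def nt_xent_loss(z_i: torch.Tensor, z_j: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z_i, z_j: (B, D) embeddings of two views of the same game states."""
    batch_size = z_i.shape[0]
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)    # (2B, D), unit norm

    # Cosine-similarity matrix between every pair of embeddings.
    sim = z @ z.t() / temperature                            # (2B, 2B)
    # Mask out self-similarity so an embedding is never its own positive.
    sim.fill_diagonal_(float("-inf"))

    # For row k, the positive example sits at k + B (or k - B): the other view.
    targets = torch.cat([
        torch.arange(batch_size) + batch_size,
        torch.arange(batch_size),
    ]).to(z.device)

    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    z_a = torch.randn(8, 128)   # embeddings of 8 frames (view A)
    z_b = torch.randn(8, 128)   # embeddings of the same states (view B)
    print(nt_xent_loss(z_a, z_b).item())
```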
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.