Game Level Blending using a Learned Level Representation
- URL: http://arxiv.org/abs/2306.16666v1
- Date: Thu, 29 Jun 2023 03:55:09 GMT
- Title: Game Level Blending using a Learned Level Representation
- Authors: Venkata Sai Revanth Atmakuri, Seth Cooper and Matthew Guzdial
- Abstract summary: We present a novel approach to game level blending that employs Clustering-based Tile Embeddings (CTE)
CTE represents game level tiles as a continuous vector representation, unifying their visual, contextual, and behavioral information.
We apply this approach to two classic Nintendo games, Lode Runner and The Legend of Zelda.
- Score: 3.3946853660795884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game level blending via machine learning, the process of combining features
of game levels to create unique and novel game levels using Procedural Content
Generation via Machine Learning (PCGML) techniques, has gained increasing
popularity in recent years. However, many existing techniques rely on
human-annotated level representations, which restricts game level blending to
a small number of annotated games. Even with annotated games, researchers
often need to author an additional shared representation to make blending possible.
In this paper, we present a novel approach to game level blending that employs
Clustering-based Tile Embeddings (CTE), a learned level representation
technique that can serve as a level representation for unannotated games and a
unified level representation across games without the need for human
annotation. CTE represents game level tiles as a continuous vector
representation, unifying their visual, contextual, and behavioral information.
We apply this approach to two classic Nintendo games, Lode Runner and The
Legend of Zelda. We run an evaluation comparing the CTE representation to a
common, human-annotated representation in the blending task and find that CTE
has comparable or better performance without the need for human annotation.
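The abstract's core idea is that tiles can be embedded as continuous vectors combining visual and behavioral information, with clustering providing a shared structure across games. As a rough illustration of that idea only (the paper's actual CTE pipeline uses trained autoencoders, not this toy setup), the sketch below clusters hypothetical tile feature vectors with a minimal k-means and forms a continuous tile representation from the tile's own features plus its cluster centroid:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns centroids and per-row cluster assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each tile vector to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels

# Hypothetical tile features (stand-ins, not real game data): flattened
# 16x16 RGB pixels plus three behavioral flags (e.g. solid, climbable, hazard).
rng = np.random.default_rng(1)
tiles = rng.random((40, 16 * 16 * 3 + 3))  # 40 tiles, 771 features each

centroids, labels = kmeans(tiles, k=5)

# A tile's continuous representation: its own feature vector concatenated
# with its cluster centroid, which acts as a shared "tile type" across games.
tile_embedding = np.concatenate([tiles[0], centroids[labels[0]]])
print(tile_embedding.shape)
```

Because the cluster centroids depend only on feature geometry, tiles from different, unannotated games can land in the same cluster, which is what makes a shared representation possible without per-game annotation.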
Related papers
- Spatial Action Unit Cues for Interpretable Deep Facial Expression Recognition [55.97779732051921]
  State-of-the-art classifiers for facial expression recognition (FER) lack interpretability, an important feature for end-users.
  A new learning strategy is proposed to explicitly incorporate AU cues into classifier training, allowing deep interpretable models to be trained.
  Our new strategy is generic and can be applied to any deep CNN- or transformer-based classifier without requiring any architectural change or significant additional training time.
  arXiv Detail & Related papers (2024-10-01T10:42:55Z)
- Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues [55.97779732051921]
  A new learning strategy is proposed to explicitly incorporate AU cues into classifier training.
  We show that our strategy can improve layer-wise interpretability without degrading classification performance.
  arXiv Detail & Related papers (2024-02-01T02:13:49Z)
- CAPro: Webly Supervised Learning with Cross-Modality Aligned Prototypes [93.71909293023663]
  Cross-modality Aligned Prototypes (CAPro) is a unified contrastive learning framework to learn visual representations with correct semantics.
  CAPro achieves new state-of-the-art performance and exhibits robustness to open-set recognition.
  arXiv Detail & Related papers (2023-10-15T07:20:22Z)
- Towards General Game Representations: Decomposing Games Pixels into Content and Style [2.570570340104555]
  Learning pixel representations of games can benefit artificial intelligence across several downstream tasks.
  This paper explores how generalizable pre-trained computer vision encoders can be for such tasks.
  We employ a pre-trained Vision Transformer encoder and a decomposition technique based on game genres to obtain separate content and style embeddings.
  arXiv Detail & Related papers (2023-07-20T17:53:04Z)
- Joint Level Generation and Translation Using Gameplay Videos [0.9645196221785693]
  Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other fields, such as image or text generation.
  Many existing methods for procedural level generation via machine learning require a secondary representation besides level images.
  We develop a novel multi-tail framework that learns to perform simultaneous level translation and generation.
  arXiv Detail & Related papers (2023-06-29T03:46:44Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
  We present a Promptable Game Model (PGM) for neural video game simulators.
  It allows a user to play the game by prompting it with high- and low-level action sequences.
  Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
  Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
  arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Tile Embedding: A General Representation for Procedural Level Generation via Machine Learning [1.590611306750623]
  We present tile embeddings, a unified, affordance-rich representation for tile-based 2D games.
  We employ autoencoders trained on the visual and semantic information of tiles from a set of existing, human-annotated games.
  We evaluate this representation on its ability to predict affordances for unseen tiles, and to serve as a PCGML representation for annotated and unannotated games.
  arXiv Detail & Related papers (2021-10-07T04:48:48Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
  This paper tackles the problem of learning a finer representation than the one provided by training labels.
  By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
  arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network [11.055580854275474]
  We show how designers may specify gameplay measures to our system and extract high-quality (playable) levels with a diverse range of level mechanics.
  An online user study shows how the different mechanics of the automatically generated levels affect subjective ratings of their perceived difficulty and appearance.
  arXiv Detail & Related papers (2020-07-11T03:38:06Z)
- Controllable Level Blending between Games using Variational Autoencoders [6.217860411034386]
  We train a VAE on level data from Super Mario Bros. and Kid Icarus, enabling it to capture the latent space spanning both games.
  We then use this space to generate level segments that combine properties of levels from both games.
  We argue that these affordances make the VAE-based approach especially suitable for co-creative level design.
  arXiv Detail & Related papers (2020-02-27T01:38:35Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
  Playing text-based games requires skills in processing natural language and sequential decision making.
  In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
  arXiv Detail & Related papers (2020-02-21T04:38:37Z)
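Several of the papers above (the VAE level-blending work and the CTE approach in the main abstract) share one mechanism: encode levels from two games into a common latent space, then interpolate between latent codes to blend them. A minimal sketch of that interpolation step, with made-up latent codes standing in for the output of some shared encoder (not shown here), assuming simple linear blending:

```python
import numpy as np

def blend_latents(z_a, z_b, alpha):
    """Linearly interpolate between two level latent codes.

    alpha=0 returns z_a unchanged; alpha=1 returns z_b.
    """
    return (1 - alpha) * z_a + alpha * z_b

# Hypothetical 32-dimensional latent codes for a segment from each game,
# as a stand-in for what a trained shared encoder would produce.
rng = np.random.default_rng(0)
z_lode, z_zelda = rng.normal(size=(2, 32))

# Sweeping the blend weight yields segments that range from one game's
# style to the other's once decoded back into tiles.
blends = [blend_latents(z_lode, z_zelda, a) for a in (0.25, 0.5, 0.75)]
print(len(blends), blends[1].shape)
```

The choice of interpolation path matters in practice; linear blending is the simplest option, and a decoder is still needed to turn each blended code back into a playable level segment.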
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.