Tile Embedding: A General Representation for Procedural Level Generation via Machine Learning
- URL: http://arxiv.org/abs/2110.03181v1
- Date: Thu, 7 Oct 2021 04:48:48 GMT
- Title: Tile Embedding: A General Representation for Procedural Level Generation via Machine Learning
- Authors: Mrunal Jadhav and Matthew Guzdial
- Abstract summary: We present tile embeddings, a unified, affordance-rich representation for tile-based 2D games.
We employ autoencoders trained on the visual and semantic information of tiles from a set of existing, human-annotated games.
We evaluate this representation on its ability to predict affordances for unseen tiles, and to serve as a PLGML representation for annotated and unannotated games.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Procedural Level Generation via Machine Learning (PLGML)
techniques have been applied to generate game levels with machine learning.
These approaches rely on human-annotated representations of game levels.
Creating annotated datasets for games requires domain knowledge and is
time-consuming. Hence, though a large number of video games exist, annotated
datasets are curated only for a small handful. Thus current PLGML techniques
have been explored in limited domains, with Super Mario Bros. as the most
common example. To address this problem, we present tile embeddings, a unified,
affordance-rich representation for tile-based 2D games. To learn this
embedding, we employ autoencoders trained on the visual and semantic
information of tiles from a set of existing, human-annotated games. We evaluate
this representation on its ability to predict affordances for unseen tiles, and
to serve as a PLGML representation for annotated and unannotated games.
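The core idea of the abstract — an autoencoder that maps a tile's visual channels plus its semantic affordance labels to a single embedding vector — can be sketched in a few lines. The dimensions, affordance names, and single-layer weights below are illustrative assumptions (the paper defines the actual architecture and training procedure); this shows only the shape of the encode/decode round trip, not learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x16 RGB tile sprite plus a 13-dimensional
# binary affordance vector (e.g. solid, climbable, hazard, ...).
PIXELS, AFFORDANCES, EMBED = 16 * 16 * 3, 13, 256

# Randomly initialised single-layer encoder/decoder weights; a real
# model would learn these by minimising reconstruction loss.
W_enc = rng.normal(0, 0.01, (PIXELS + AFFORDANCES, EMBED))
W_dec = rng.normal(0, 0.01, (EMBED, PIXELS + AFFORDANCES))

def encode(tile_rgb, affordances):
    """Map a tile's visual + semantic channels to one embedding vector."""
    x = np.concatenate([tile_rgb.ravel(), affordances])
    return np.tanh(x @ W_enc)

def decode(z):
    """Reconstruct pixel values and affordance logits from an embedding."""
    y = z @ W_dec
    return y[:PIXELS].reshape(16, 16, 3), y[PIXELS:]

tile = rng.random((16, 16, 3))   # stand-in tile sprite
aff = np.zeros(AFFORDANCES)
aff[0] = 1.0                     # e.g. mark the tile as "solid"
z = encode(tile, aff)
recon_tile, recon_aff = decode(z)
```

Because both pixels and affordances pass through the same bottleneck, the embedding carries enough semantic signal to predict affordances for tiles from unannotated games — the evaluation the abstract describes.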
Related papers
- Grammar-based Game Description Generation using Large Language Models [12.329521804287259]
We introduce the grammar of game descriptions, which effectively structures the game design space, into the reasoning process.
Our experiments demonstrate that this approach performs well in generating game descriptions.
arXiv Detail & Related papers (2024-07-24T16:36:02Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z) - ClawMachine: Fetching Visual Tokens as An Entity for Referring and Grounding [67.63933036920012]
Existing methods, including proxy encoding and geometry encoding, incorporate additional syntax to encode the object's location.
This study presents ClawMachine, offering a new methodology that notates an entity directly using the visual tokens.
ClawMachine unifies visual referring and grounding into an auto-regressive format and learns with a decoder-only architecture.
arXiv Detail & Related papers (2024-06-17T08:39:16Z) - Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z) - Learning to Prompt with Text Only Supervision for Vision-Language Models [107.282881515667]
One branch of methods adapts CLIP by learning prompts using visual information.
An alternative approach resorts to training-free methods by generating class descriptions from large language models.
We propose to combine the strengths of both streams by learning prompts using only text data.
arXiv Detail & Related papers (2024-01-04T18:59:49Z) - Game Level Blending using a Learned Level Representation [3.3946853660795884]
We present a novel approach to game level blending that employs Clustering-based Tile Embeddings (CTE).
CTE represents game level tiles as a continuous vector representation, unifying their visual, contextual, and behavioral information.
We apply this approach to two classic Nintendo games, Lode Runner and The Legend of Zelda.
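One way to read the CTE idea: tiles live in a continuous embedding space, and clustering that space yields a discrete tile vocabulary a generator can sample from. A toy nearest-centroid sketch with made-up dimensions (real CTE vectors are learned by an autoencoder; this is not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 100 tile embeddings in an 8-d space.
embeddings = rng.normal(size=(100, 8))

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm: returns (centroids, cluster id per point)."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned embeddings
        for c in range(k):
            if (labels == c).any():
                centroids[c] = points[labels == c].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(embeddings, k=5)
# Each cluster id now acts as a discrete "tile type" a level generator
# can emit, then map back to a representative vector via its centroid.
```

Treating cluster ids as a shared vocabulary is what lets levels from two games, such as Lode Runner and The Legend of Zelda, be blended in one representation.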
arXiv Detail & Related papers (2023-06-29T03:55:09Z) - Joint Level Generation and Translation Using Gameplay Videos [0.9645196221785693]
Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other fields, such as image or text generation.
Many existing methods for procedural level generation via machine learning require a secondary representation besides level images.
We develop a novel multi-tail framework that learns to perform simultaneous level translation and generation.
arXiv Detail & Related papers (2023-06-29T03:46:44Z) - Harnessing Explanations: LLM-to-LM Interpreter for Enhanced
Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z) - Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion
Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z) - Learning Task-Independent Game State Representations from Unlabeled
Images [2.570570340104555]
Self-supervised learning (SSL) techniques have been widely used to learn compact and informative representations from complex data.
This paper investigates whether SSL methods can be leveraged for the task of learning accurate state representations of games.
arXiv Detail & Related papers (2022-06-13T21:37:58Z) - Level generation and style enhancement -- deep learning for game
development overview [0.0]
We present seven approaches to create level maps, each using statistical methods, machine learning, or deep learning.
We aim to present new possibilities for game developers and level artists.
arXiv Detail & Related papers (2021-07-15T15:24:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.