Grammar-based Game Description Generation using Large Language Models
- URL: http://arxiv.org/abs/2407.17404v1
- Date: Wed, 24 Jul 2024 16:36:02 GMT
- Title: Grammar-based Game Description Generation using Large Language Models
- Authors: Tsunehiko Tanaka, Edgar Simo-Serra
- Abstract summary: We introduce the grammar of game descriptions, which effectively structures the game design space, into the reasoning process.
Our experiments demonstrate that this approach performs well in generating game descriptions.
- Score: 12.329521804287259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To lower the barriers to game design development, automated game design, which generates game designs through computational processes, has been explored. In automated game design, machine learning-based techniques such as evolutionary algorithms have achieved success. Benefiting from the remarkable advancements in deep learning, applications in computer vision and natural language processing have progressed in level generation. However, due to the limited amount of data in game design, the application of deep learning has been insufficient for tasks such as game description generation. To pioneer a new approach for handling limited data in automated game design, we focus on the in-context learning of large language models (LLMs). LLMs can capture the features of a task from a few demonstration examples and apply the capabilities acquired during pre-training. We introduce the grammar of game descriptions, which effectively structures the game design space, into the LLMs' reasoning process. Grammar helps LLMs capture the characteristics of the complex task of game description generation. Furthermore, we propose a decoding method that iteratively improves the generated output by leveraging the grammar. Our experiments demonstrate that this approach performs well in generating game descriptions.
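The abstract's iterative, grammar-leveraging decoding loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy section names, the regex-based line grammar, and the `generate` callback are all assumptions standing in for a real game-description grammar and an actual LLM call.

```python
import re

# A toy line-level grammar for a VGDL-like game description.
# Section names and patterns are illustrative, not the paper's grammar.
GRAMMAR = {
    "SpriteSet": re.compile(r"^\s+\w+ > \w+$"),
    "TerminationSet": re.compile(r"^\s+\w+ limit=\d+$"),
}

def grammar_errors(description: str) -> list:
    """Return (line_number, line) pairs that violate the toy grammar."""
    errors, section = [], None
    for i, line in enumerate(description.splitlines()):
        if not line.strip():
            continue
        if line.rstrip() in GRAMMAR:  # section header opens a new scope
            section = line.rstrip()
        elif section is None or not GRAMMAR[section].match(line):
            errors.append((i, line))
    return errors

def iterative_decode(generate, max_rounds: int = 3) -> str:
    """Regenerate until the output parses, feeding grammar errors back
    to the model as feedback on each round."""
    feedback, text = "", ""
    for _ in range(max_rounds):
        text = generate(feedback)
        errs = grammar_errors(text)
        if not errs:
            return text
        feedback = "; ".join(f"line {i}: {l!r}" for i, l in errs)
    return text

# Demo with a stubbed "LLM" that fixes its mistake on the second round.
drafts = ["SpriteSet\n    avatar MovingAvatar",
          "SpriteSet\n    avatar > MovingAvatar"]
fixed = iterative_decode(lambda fb: drafts.pop(0))
```

The key design choice mirrored here is that the grammar serves two roles: it structures the search space, and its violation messages become the feedback signal for the next decoding round.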
Related papers
- Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study [3.4333699338998693]
This pilot study explores the application of language models (LMs) to model game event sequences.
We transform raw event data into textual sequences and pretrain a Longformer model on them.
The results demonstrate the potential of self-supervised LMs in enhancing game design and personalization without relying on ground-truth labels.
arXiv Detail & Related papers (2024-10-24T09:59:10Z) - ChatPCG: Large Language Model-Driven Reward Design for Procedural Content Generation [3.333383360927007]
This paper proposes ChatPCG, a large language model (LLM)-driven reward design framework.
It leverages human-level insights, coupled with game expertise, to generate rewards tailored to specific game features automatically.
ChatPCG is integrated with deep reinforcement learning, demonstrating its potential for multiplayer game content generation tasks.
arXiv Detail & Related papers (2024-06-07T08:18:42Z) - Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z) - On Automating Video Game Regression Testing by Planning and Learning [3.746904317622708]
We propose a method and workflow for automating regression testing of certain video game aspects.
The basic idea is to use detailed game logs and incremental action model learning techniques to maintain a formal model.
This paper presents the first step towards minimizing or even eliminating the need for a modeling expert in the workflow.
arXiv Detail & Related papers (2024-02-16T14:28:25Z) - Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z) - Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z) - Tile Embedding: A General Representation for Procedural Level Generation via Machine Learning [1.590611306750623]
We present tile embeddings, a unified, affordance-rich representation for tile-based 2D games.
We employ autoencoders trained on the visual and semantic information of tiles from a set of existing, human-annotated games.
We evaluate this representation on its ability to predict affordances for unseen tiles, and to serve as a PLGML representation for annotated and unannotated games.
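The tile-embedding idea above — compressing a tile's visual and semantic (affordance) information into one vector — can be sketched with a linear autoencoder. This is a hedged illustration under stated assumptions: the affordance vocabulary, feature layout, and the SVD-based fit (the closed-form optimum of a linear autoencoder, used here instead of gradient training) are not the paper's actual architecture.

```python
import numpy as np

def tile_features(pixels, affordances, vocab):
    """Concatenate flattened pixel values with a multi-hot affordance vector.
    `vocab` is an ordered list of affordance labels (illustrative names)."""
    hot = np.array([1.0 if a in affordances else 0.0 for a in vocab])
    return np.concatenate([np.asarray(pixels, float).ravel(), hot])

def fit_linear_autoencoder(X, dim):
    """Fit a linear autoencoder via truncated SVD: the top singular
    vectors of the centered data give the optimal linear encoder."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:dim]                        # encoder weights (dim x features)
    encode = lambda x: (x - mu) @ W.T   # tile features -> embedding
    decode = lambda z: z @ W + mu       # embedding -> tile features
    return encode, decode
```

A real pipeline would replace the linear map with a convolutional autoencoder over tile images, but the interface — features in, low-dimensional embedding out, decodable back to pixels and affordances — is the same.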
arXiv Detail & Related papers (2021-10-07T04:48:48Z) - Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z) - Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.