Instruction-Driven Game Engines on Large Language Models
- URL: http://arxiv.org/abs/2404.00276v4
- Date: Fri, 23 Aug 2024 04:06:41 GMT
- Title: Instruction-Driven Game Engines on Large Language Models
- Authors: Hongqiu Wu, Yan Wang, Xingyuan Liu, Hai Zhao, Min Zhang
- Abstract summary: The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
- Score: 59.280666591243154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Instruction-Driven Game Engine (IDGE) project aims to democratize game development by enabling a large language model (LLM) to follow free-form game rules and autonomously generate game-play processes. The IDGE allows users to create games by issuing simple natural language instructions, which significantly lowers the barrier for game development. We approach the learning process for IDGEs as a Next State Prediction task, wherein the model autoregressively predicts in-game states given player actions. It is a challenging task because the computation of in-game states must be precise; otherwise, slight errors could disrupt the game-play. To address this, we train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios. Our initial progress lies in developing an IDGE for Poker, a universally cherished card game. The engine we've designed not only supports a wide range of poker variants but also allows for high customization of rules through natural language inputs. Furthermore, it favors rapid prototyping of new games from minimal samples, which suggests an innovative game-development paradigm that relies on minimal prompt and data engineering. This work lays the groundwork for future advancements in instruction-driven game creation, potentially transforming how games are designed and played.
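The abstract frames the engine as a Next State Prediction task trained under a curriculum. Below is a minimal sketch of how such training examples might be assembled for an LLM-based game engine; the Transition fields, the prompt layout, and the length-based complexity heuristic used for curriculum ordering are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of Next State Prediction data preparation for an
# instruction-driven game engine. Field names, prompt format, and the
# complexity heuristic are assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class Transition:
    rules: str       # free-form natural-language game rules
    state: str       # serialized current in-game state
    action: str      # the player's action at this step
    next_state: str  # serialized state the engine must produce


def to_training_example(t: Transition) -> dict:
    """Frame a transition as a prompt/target pair for causal LM fine-tuning."""
    prompt = (
        f"RULES:\n{t.rules}\n\n"
        f"STATE:\n{t.state}\n\n"
        f"ACTION:\n{t.action}\n\n"
        f"NEXT STATE:\n"
    )
    return {"prompt": prompt, "target": t.next_state}


def complexity(t: Transition) -> int:
    """Crude proxy for scenario difficulty: longer rules/states are assumed harder."""
    return len(t.rules) + len(t.state)


def curriculum_order(data: List[Transition]) -> List[dict]:
    """Order examples from simple to complex, mimicking curriculum training."""
    return [to_training_example(t) for t in sorted(data, key=complexity)]
```

Under this framing, a trainer would fine-tune a causal LM on the concatenated prompt and target, computing the loss only over the target (next-state) tokens, and feed batches in the curriculum order so that exposure to complex scenarios increases gradually.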
Related papers
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs.
arXiv Detail & Related papers (2024-10-17T11:16:27Z) - Grammar-based Game Description Generation using Large Language Models [12.329521804287259]
We introduce the grammar of game descriptions, which effectively structures the game design space, into the reasoning process.
Our experiments demonstrate that this approach performs well in generating game descriptions.
arXiv Detail & Related papers (2024-07-24T16:36:02Z) - GAVEL: Generating Games Via Evolution and Language Models [40.896938709468465]
We explore the generation of novel games in the Ludii game description language.
We train a model that intelligently mutates and recombines games and mechanics expressed as code.
A sample of the generated games are available to play online through the Ludii portal.
arXiv Detail & Related papers (2024-07-12T16:08:44Z) - On Automating Video Game Regression Testing by Planning and Learning [3.746904317622708]
We propose a method and workflow for automating regression testing of certain video game aspects.
The basic idea is to use detailed game logs and incremental action model learning techniques to maintain a formal model.
This paper presents the first step towards minimizing or even eliminating the need for a modeling expert in the workflow.
arXiv Detail & Related papers (2024-02-16T14:28:25Z) - Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z) - Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z) - Learning Chess Blindfolded: Evaluating Language Models on State Tracking [69.3794549747725]
We consider the task of language modeling for the game of chess.
Unlike natural language, chess notations describe a simple, constrained, and deterministic domain.
We find that transformer language models can learn to track pieces and predict legal moves with high accuracy when trained solely on move sequences.
arXiv Detail & Related papers (2021-02-26T01:16:23Z) - Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)