A Text-to-Game Engine for UGC-Based Role-Playing Games
- URL: http://arxiv.org/abs/2407.08195v2
- Date: Sat, 11 Jan 2025 07:53:25 GMT
- Title: A Text-to-Game Engine for UGC-Based Role-Playing Games
- Authors: Lei Zhang, Xuezheng Peng, Shuyi Yang, Feiyang Wang
- Abstract summary: This paper introduces a novel framework for a text-to-game engine that leverages foundation models to transform simple textual inputs into intricate, multi-modal RPG experiences. The engine dynamically generates game narratives, integrating text, visuals, and mechanics, while adapting characters, environments, and gameplay in real time based on player interactions.
- Score: 6.5715027492220734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transition from professionally generated content (PGC) to user-generated content (UGC) has reshaped various media formats, including text and video. With rapid advancements in generative AI, a similar transformation is set to redefine the gaming industry, particularly within the domain of role-playing games (RPGs). This paper introduces a novel framework for a text-to-game engine that leverages foundation models to transform simple textual inputs into intricate, multi-modal RPG experiences. The engine dynamically generates game narratives, integrating text, visuals, and mechanics, while adapting characters, environments, and gameplay in real time based on player interactions. To evaluate and demonstrate the feasibility and versatility of this framework, we developed the 'Zagii' game engine. Zagii has successfully powered hundreds of RPG games across diverse genres and facilitated tens of thousands of online gameplay sessions, showcasing its scalability and adaptability. These results highlight the framework's effectiveness and its potential to foster a more open and democratized approach to game development. Our work underscores the transformative role of generative AI in reshaping the gaming lifecycle and advancing the boundaries of interactive entertainment.
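The paper does not publish the engine's internals, but the loop the abstract describes (player text in, adapted narrative out, state carried across turns) can be illustrated with a minimal, hypothetical sketch. Everything below (the `GameState` shape, `call_llm`, the prompt format) is our own placeholder, not Zagii's actual API:

```python
# Minimal, hypothetical sketch of the generate-adapt loop described in the
# abstract. `call_llm` stands in for any text-generation backend; the real
# engine is not specified at this level of detail.
from dataclasses import dataclass, field


@dataclass
class GameState:
    premise: str                                  # the player's initial textual input
    history: list = field(default_factory=list)   # past narration and actions


def call_llm(prompt: str) -> str:
    """Placeholder for a foundation-model call (e.g. a chat-completion API)."""
    raise NotImplementedError


def play_turn(state: GameState, player_action: str) -> str:
    """One turn: fold the player's action into the context, generate the
    next narrative beat, and persist it so later turns stay consistent."""
    prompt = (
        f"Game premise: {state.premise}\n"
        f"Story so far: {' '.join(state.history[-10:])}\n"
        f"Player action: {player_action}\n"
        "Continue the story and describe the updated scene:"
    )
    narration = call_llm(prompt)
    state.history.append(f"[player] {player_action}")
    state.history.append(f"[narrator] {narration}")
    return narration
```

In the full engine, the same loop would presumably also trigger image generation and mechanics updates; this sketch covers only the text channel.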
Related papers
- AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction [58.240114139186275]
Recently, a pioneering approach for infinite anime life simulation employs large language models (LLMs) to translate multi-turn text dialogues into language instructions for image generation.
We propose AnimeGamer, which is built upon Multimodal Large Language Models (MLLMs) to generate each game state.
We introduce novel action-aware multimodal representations to represent animation shots, which can be decoded into high-quality video clips.
arXiv Detail & Related papers (2025-04-01T17:57:18Z)
- Position: Interactive Generative Video as Next-Generation Game Engine [32.7449148483466]
We propose Interactive Generative Video (IGV) as the foundation for Generative Game Engines (GGE).
IGV's unique strengths include unlimited high-quality content synthesis, physics-aware world modeling, user-controlled interactivity, long-term memory capabilities, and causal reasoning.
Our work charts a new course for game development in the AI era, envisioning a future where AI-powered generative systems fundamentally reshape how games are created and experienced.
arXiv Detail & Related papers (2025-03-21T17:59:22Z)
- Static Vs. Agentic Game Master AI for Facilitating Solo Role-Playing Experiences [3.383857646639421]
This paper presents a game master AI for single-player role-playing games.
The AI is designed to deliver interactive text-based narratives and experiences typically associated with multiplayer tabletop games like Dungeons & Dragons.
arXiv Detail & Related papers (2025-02-26T19:42:22Z)
- Unbounded: A Generative Infinite Game of Character Life Simulation [68.37260000219479]
We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
arXiv Detail & Related papers (2024-10-24T17:59:31Z)
- Instruction-Driven Game Engine: A Poker Case Study [53.689520884467065]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game descriptions and generate game-play processes.
We train the IDGE in a curriculum manner that progressively increases its exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, which not only supports a wide range of poker variants but also allows for highly individualized new poker games through natural language inputs. (A toy sketch of this rules-state-input interface follows this entry.)
arXiv Detail & Related papers (2024-10-17T11:16:27Z)
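As a rough, non-authoritative illustration of the interface described in the entry above (game rules + current state + player input → next state), here is a toy sketch; the function name, prompt layout, and `model.generate` call are our assumptions, not the paper's actual I/O:

```python
# Toy rendering of an instruction-driven game engine step:
# next_state = f(rules, state, player_input). The prompt format and the
# `model.generate` interface are illustrative assumptions only.
def next_state(model, rules: str, state: str, player_input: str) -> str:
    prompt = (
        f"Rules:\n{rules}\n\n"
        f"Current state:\n{state}\n\n"
        f"Player input: {player_input}\n\n"
        "Next state:"
    )
    return model.generate(prompt)  # any text-generation backend works here
```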
- Open Role-Playing with Delta-Engines [50.86533710515017]
We propose a new style of game-play to bridge self-expression and role-playing: open role-playing games (ORPGs).
Our vision is that, in the real world, people are similar at birth but grow into unique individuals through the very different choices they make afterward.
In an ORPG, we empower players with freedom to decide their own growing curves through natural language inputs, ultimately becoming unique characters.
arXiv Detail & Related papers (2024-08-11T18:32:29Z)
- GAVEL: Generating Games Via Evolution and Language Models [40.896938709468465]
We explore the generation of novel games in the Ludii game description language.
We train a model that intelligently mutates and recombines games and mechanics expressed as code.
A sample of the generated games is available to play online through the Ludii portal. (A schematic sketch of the evolve-score-select loop follows this entry.)
arXiv Detail & Related papers (2024-07-12T16:08:44Z)
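The mutate/recombine/select cycle mentioned above can be sketched generically. This is a plain evolutionary loop under our own simplifying assumptions; in GAVEL itself, `mutate` and `crossover` would be backed by the trained code model and `fitness` by evaluating the resulting games in Ludii:

```python
import random


def evolve(population, mutate, crossover, fitness, generations=50):
    """Schematic evolve-score-select loop. Candidate games are code strings;
    `mutate` and `crossover` stand in for a learned code model, and
    `fitness` for an evaluator of the resulting games."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: max(2, len(ranked) // 2)]   # truncation selection
        children = [ranked[0]]                         # keep the current best
        while len(children) < len(population):
            a, b = random.sample(parents, 2)           # pick two parents
            children.append(mutate(crossover(a, b)))   # recombine, then mutate
        population = children
    return max(population, key=fitness)
```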
- Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z)
- Towards a Holodeck-style Simulation Game [40.044978986425676]
Infinitia uses generative image and language models to reshape all aspects of the setting and NPCs based on a short description from the player.
Infinitia is implemented in the Unity engine with a server-client architecture.
It uses a multiplayer framework that allows humans to be present in and interact with the simulation.
arXiv Detail & Related papers (2023-08-22T19:19:19Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Multi-Game Decision Transformers [49.257185338595434]
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Entity Embedding as Game Representation [0.9645196221785693]
We present an autoencoder for deriving what we call "entity embeddings".
In this paper, we introduce the learned representation, along with some evidence of its quality and future utility. (A toy autoencoder sketch follows this entry.)
arXiv Detail & Related papers (2020-10-04T21:16:45Z)
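The autoencoder idea in the entry above (compress an entity's features into a small embedding and reconstruct them) can be sketched in a few lines of PyTorch. All sizes and layer choices below are arbitrary placeholders, not the paper's architecture:

```python
import torch
from torch import nn


class EntityAutoencoder(nn.Module):
    """Toy autoencoder for 'entity embeddings': compress a flat entity
    feature vector into a small latent code and reconstruct it."""
    def __init__(self, n_features: int = 64, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x):
        z = self.encoder(x)            # the entity embedding
        return self.decoder(z), z


model = EntityAutoencoder()
x = torch.rand(16, 64)                 # a batch of 16 toy entity vectors
recon, embedding = model(x)
loss = nn.functional.mse_loss(recon, x)  # standard reconstruction loss
```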
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training. (A schematic of the action-conditioned next-frame idea follows this entry.)
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
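GameGAN's core signal (current frame + pressed key → next frame, learned from recorded play) can be caricatured with a tiny action-conditioned dynamics network. The real model is a full GAN with external memory; this sketch keeps only the conditioning idea, and the MSE loss is a stand-in for the adversarial objective:

```python
import torch
from torch import nn


class TinyDynamics(nn.Module):
    """Action-conditioned next-frame predictor: concatenate the current
    frame with a one-hot key press and predict the following frame."""
    def __init__(self, frame_dim=32 * 32 * 3, n_actions=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, frame_dim),
        )

    def forward(self, frame, action_onehot):
        return self.net(torch.cat([frame, action_onehot], dim=-1))


model = TinyDynamics()
frames = torch.rand(8, 32 * 32 * 3)        # current frames (flattened)
next_frames = torch.rand(8, 32 * 32 * 3)   # recorded next frames
actions = nn.functional.one_hot(
    torch.randint(0, 4, (8,)), num_classes=4
).float()                                  # one-hot key presses
pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)  # stand-in for the GAN loss
```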
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.