STORY2GAME: Generating (Almost) Everything in an Interactive Fiction Game
- URL: http://arxiv.org/abs/2505.03547v1
- Date: Tue, 06 May 2025 14:00:41 GMT
- Title: STORY2GAME: Generating (Almost) Everything in an Interactive Fiction Game
- Authors: Eric Zhou, Shreyas Basavatia, Moontashir Siam, Zexin Chen, Mark O. Riedl
- Abstract summary: We introduce STORY2GAME, a novel approach to generate text-based interactive fiction games. It starts by generating a story, populates the world, and builds the code for actions in a game engine that enables the story to play out interactively. We evaluate the success rate of action code generation with respect to whether a player can interactively play through the entire generated story.
- Score: 15.427907377465685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce STORY2GAME, a novel approach to using Large Language Models to generate text-based interactive fiction games that starts by generating a story, populates the world, and builds the code for actions in a game engine that enables the story to play out interactively. Whereas a given set of hard-coded actions can artificially constrain story generation, the ability to generate actions means the story generation process can be more open-ended but still allow for experiences that are grounded in a game state. The key to successful action generation is to use LLM-generated preconditions and effects of actions in the stories as guides for what aspects of the game state must be tracked and changed by the game engine when a player performs an action. We also introduce a technique for dynamically generating new actions to accommodate the player's desire to perform actions that they think of that are not part of the story. Dynamic action generation may require on-the-fly updates to the game engine's state representation and revision of previously generated actions. We evaluate the success rate of action code generation with respect to whether a player can interactively play through the entire generated story.
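To make this concrete, here is a minimal sketch, in Python, of how LLM-generated preconditions and effects could drive a game engine's state tracking. All names here (Action, GameState, the fact keys) are illustrative assumptions, not the paper's actual code:

```python
# Hypothetical sketch of precondition/effect-driven actions; all names are
# illustrative assumptions, not STORY2GAME's actual representation.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: dict  # facts that must hold in the game state to fire
    effects: dict        # facts the action writes into the game state

@dataclass
class GameState:
    facts: dict = field(default_factory=dict)

    def try_apply(self, action: Action) -> bool:
        # The action fires only if every precondition matches the state.
        if all(self.facts.get(k) == v for k, v in action.preconditions.items()):
            self.facts.update(action.effects)
            return True
        return False

# An action generated for a story beat like "the hero unlocks the vault".
unlock_vault = Action(
    name="unlock vault",
    preconditions={"player_has_key": True, "player_location": "vault_door"},
    effects={"vault_open": True},
)

state = GameState(facts={"player_has_key": True, "player_location": "vault_door"})
print(state.try_apply(unlock_vault))  # True; the state now records vault_open
```

On this reading, dynamic action generation amounts to synthesizing new Action records at play time; when a new precondition references a fact the engine has never tracked, the state representation (and potentially previously generated actions) must be revised to account for it.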
Related papers
- AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction [58.240114139186275]
Recently, a pioneering approach for infinite anime life simulation employs large language models (LLMs) to translate multi-turn text dialogues into language instructions for image generation. We propose AnimeGamer, which is built upon Multimodal Large Language Models (MLLMs) to generate each game state. We introduce novel action-aware multimodal representations to represent animation shots, which can be decoded into high-quality video clips.
arXiv Detail & Related papers (2025-04-01T17:57:18Z)
- GameFactory: Creating New Games with Generative Interactive Videos [32.98135338530966]
Generative videos have the potential to revolutionize game development by autonomously creating new content. We present GameFactory, a framework for action-controlled scene-generalizable game video generation. Experimental results demonstrate that GameFactory effectively generates open-domain action-controllable game videos.
arXiv Detail & Related papers (2025-01-14T18:57:21Z)
- A Text-to-Game Engine for UGC-Based Role-Playing Games [6.5715027492220734]
This paper introduces a novel framework for a text-to-game engine that leverages foundation models to transform simple textual inputs into intricate, multi-modal RPG experiences. The engine dynamically generates game narratives, integrating text, visuals, and mechanics, while adapting characters, environments, and gameplay in real time based on player interactions.
arXiv Detail & Related papers (2024-07-11T05:33:19Z)
- StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning [8.851718319632973]
Large Language Models (LLMs) drive the behavior of virtual characters, allowing plots to emerge from interactions between characters and their environments.
We propose a novel plot creation workflow that mediates between a writer's authorial intent and the emergent behaviors from LLM-driven character simulation.
The process creates "living stories" that dynamically adapt to various game world states, resulting in narratives co-created by the author, character simulation, and player.
arXiv Detail & Related papers (2024-05-17T23:04:51Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- Ambient Adventures: Teaching ChatGPT on Developing Complex Stories [8.07595093287034]
Imaginary play can be seen as taking real objects and locations and using them as imaginary objects and locations in virtual scenarios.
We adopt the story-generation capability of large language models (LLMs), using human-written prompts to obtain the stories for imaginary play. The generated stories are then simplified and mapped into action sequences that can guide the agent in imaginary play, as sketched after this entry.
arXiv Detail & Related papers (2023-08-03T12:52:49Z)
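As a toy illustration of the story-to-action-sequence mapping described in the summary above (the verb-to-primitive vocabulary is invented for this example, not taken from the paper):

```python
# Toy mapping from simplified story sentences to agent action primitives.
# The primitive vocabulary below is hypothetical.
PRIMITIVES = {"go": "MOVE", "take": "PICKUP", "use": "INTERACT"}

def story_to_actions(sentences):
    """Map simplified sentences such as 'take cup' to primitive calls."""
    actions = []
    for sentence in sentences:
        verb, *args = sentence.lower().split()
        if verb in PRIMITIVES and args:
            actions.append(f"{PRIMITIVES[verb]}({' '.join(args)})")
    return actions

print(story_to_actions(["go kitchen", "take cup", "use cup"]))
# -> ['MOVE(kitchen)', 'PICKUP(cup)', 'INTERACT(cup)']
```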
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions; a toy example of such a graph appears after this entry.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
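As a rough illustration of the grounded game-state graph described above, here is a set of (subject, relation, object) triples; the triple format is a generic knowledge-graph convention, not necessarily the paper's exact schema:

```python
# A text-adventure game state encoded as (subject, relation, object) triples.
# Entity and relation names are invented for illustration.
game_state = {
    ("player", "located_in", "tavern"),
    ("player", "carries", "rusty_sword"),
    ("tavern", "contains", "barkeep"),
    ("barkeep", "holds", "map"),
}

def describe(state):
    """Flatten the graph into clauses a language model can consume."""
    return ". ".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in sorted(state))

print(describe(game_state))
# -> "barkeep holds map. player carries rusty_sword. ..."
```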
- CHAE: Fine-Grained Controllable Story Generation with Characters, Actions and Emotions [10.694612203803146]
This paper proposes a model for fine-grained control over story generation. It allows the generation of customized stories in which characters, their corresponding actions, and emotions are arbitrarily assigned. The model exhibits strong controllability, generating stories that follow fine-grained, personalized guidance.
arXiv Detail & Related papers (2022-10-11T07:37:50Z)
- Cue Me In: Content-Inducing Approaches to Interactive Story Generation [74.09575609958743]
We focus on the task of interactive story generation, where the user provides the model with mid-level sentence abstractions.
We present two content-inducing approaches to effectively incorporate this additional information.
Experimental results from both automatic and human evaluations show that these methods produce more topically coherent and personalized stories.
arXiv Detail & Related papers (2020-10-20T00:36:15Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)