PANGeA: Procedural Artificial Narrative using Generative AI for Turn-Based Video Games
- URL: http://arxiv.org/abs/2404.19721v3
- Date: Tue, 9 Jul 2024 23:45:27 GMT
- Title: PANGeA: Procedural Artificial Narrative using Generative AI for Turn-Based Video Games
- Authors: Steph Buongiorno, Lawrence Jake Klinkert, Tanishq Chawla, Zixin Zhuang, Corey Clark
- Abstract summary: This research introduces Procedural Artificial Narrative using Generative AI (PANGeA).
PANGeA is a structured approach for leveraging large language models (LLMs) to generate narrative content for turn-based role-playing video games (RPGs).
The NPCs generated by PANGeA are personality-biased and express traits from the Big 5 Personality Model in their generated responses.
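The paper does not publish PANGeA's prompt templates, so the following is only a minimal illustrative sketch of how Big 5 (OCEAN) trait scores could bias an NPC's LLM responses via a system-prompt fragment. All names (`Big5Profile`, `personality_prompt`, the NPC "Mira") are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Big5Profile:
    """Big 5 trait scores in [0.0, 1.0]; names follow the OCEAN model."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def trait_descriptor(name: str, score: float) -> str:
    """Map a numeric trait score to a coarse textual descriptor."""
    if score >= 0.66:
        level = "high"
    elif score >= 0.33:
        level = "moderate"
    else:
        level = "low"
    return f"{level} {name}"

def personality_prompt(npc_name: str, profile: Big5Profile) -> str:
    """Build a system-prompt fragment biasing an LLM toward the profile."""
    traits = ", ".join(
        trait_descriptor(name, score)
        for name, score in vars(profile).items()
    )
    return (
        f"You are {npc_name}, a non-playable character. "
        f"Respond in a voice consistent with these Big 5 traits: {traits}."
    )

print(personality_prompt("Mira", Big5Profile(0.9, 0.4, 0.2, 0.7, 0.1)))
```

The fragment would be prepended to the LLM's system prompt so every generated response is conditioned on the same trait description.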
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research introduces Procedural Artificial Narrative using Generative AI (PANGeA), a structured approach for leveraging large language models (LLMs), guided by a game designer's high-level criteria, to generate narrative content for turn-based role-playing video games (RPGs). Distinct from prior applications of LLMs in video game design, PANGeA innovates by not only generating game level data (which includes, but is not limited to, setting, key items, and non-playable characters (NPCs)), but also by fostering dynamic, free-form interactions between the player and the environment that align with the procedural game narrative. The NPCs generated by PANGeA are personality-biased and express traits from the Big 5 Personality Model in their generated responses. PANGeA addresses challenges behind ingesting free-form text input, which can prompt LLM responses beyond the scope of the game narrative. A novel validation system that uses the LLM's intelligence evaluates text input and aligns generated responses with the unfolding narrative. Making these interactions possible, PANGeA is supported by a server that hosts a custom memory system that supplies context for augmenting generated responses, thus aligning them with the procedural narrative. For broad application, the server has a REST interface enabling any game engine to integrate directly with PANGeA, as well as an LLM interface adaptable to local or private LLMs. PANGeA's ability to foster dynamic narrative generation by aligning responses with the procedural narrative is demonstrated through an empirical study and ablation test of two versions of a demo game: a custom browser-based GPT and a Unity demo. As the results show, PANGeA holds potential to assist game designers in using LLMs to generate narrative-consistent content even when provided with varied, unpredictable, free-form text input.
Related papers
- A Text-to-Game Engine for UGC-Based Role-Playing Games [6.5715027492220734]
This paper introduces a new framework for a text-to-game engine that utilizes foundation models to convert simple textual inputs into complex, interactive RPG experiences.
The engine dynamically renders the game story in a multi-modal format and adjusts the game character, environment, and mechanics in real-time in response to player actions.
arXiv Detail & Related papers (2024-07-11T05:33:19Z) - LLaRA: Supercharging Robot Learning Data for Vision-Language Policy [56.505551117094534]
Large Language Models (LLMs) equipped with extensive world knowledge and strong reasoning skills can tackle diverse tasks across domains.
We propose LLaRA: Large Language and Robotics Assistant, a framework which formulates robot action policy as conversations.
arXiv Detail & Related papers (2024-06-28T17:59:12Z) - Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z) - SimsChat: A Customisable Persona-Driven Role-Playing Agent [29.166067413153353]
Large Language Models (LLMs) possess the capability to understand human instructions and generate high-quality text.
We introduce the Customisable Conversation Agent Framework, which employs LLMs to simulate real-world characters.
We present SimsChat, a freely customisable role-playing agent.
arXiv Detail & Related papers (2024-06-25T22:44:17Z) - StoryVerse: Towards Co-authoring Dynamic Plot with LLM-based Character Simulation via Narrative Planning [8.851718319632973]
Large Language Models (LLMs) drive the behavior of virtual characters, allowing plots to emerge from interactions between characters and their environments.
We propose a novel plot creation workflow that mediates between a writer's authorial intent and the emergent behaviors from LLM-driven character simulation.
The process creates "living stories" that dynamically adapt to various game world states, resulting in narratives co-created by the author, character simulation, and player.
arXiv Detail & Related papers (2024-05-17T23:04:51Z) - Game Generation via Large Language Models [3.4051285393187327]
This paper investigates game generation via large language models (LLMs).
Based on video game description language, this paper proposes an LLM-based framework to generate game rules and levels simultaneously.
arXiv Detail & Related papers (2024-04-11T10:06:05Z) - GENEVA: GENErating and Visualizing branching narratives using LLMs [15.43734266732214]
GENEVA, a prototype tool, generates a rich narrative graph with branching and reconverging storylines.
GENEVA has the potential to assist in game development, simulations, and other applications with game-like properties.
arXiv Detail & Related papers (2023-11-15T18:55:45Z) - BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs [101.50522135049198]
BuboGPT is a multi-modal LLM with visual grounding that can perform cross-modal interaction between vision, audio and language.
Our contributions are two-fold: 1) An off-the-shelf visual grounding module based on SAM that extracts entities in a sentence and finds corresponding masks in the image.
Our experiments show that BuboGPT achieves impressive multi-modality understanding and visual grounding abilities in interactions with humans.
arXiv Detail & Related papers (2023-07-17T15:51:47Z) - SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z) - Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z) - Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes an LLM-Augmenter system, which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)