PAYADOR: A Minimalist Approach to Grounding Language Models on Structured Data for Interactive Storytelling and Role-playing Games
- URL: http://arxiv.org/abs/2504.07304v1
- Date: Wed, 09 Apr 2025 21:59:31 GMT
- Title: PAYADOR: A Minimalist Approach to Grounding Language Models on Structured Data for Interactive Storytelling and Role-playing Games
- Authors: Santiago Góngora, Luis Chiruzzo, Gonzalo Méndez, Pablo Gervás
- Abstract summary: PAYADOR focuses on predicting the outcomes of the actions instead of representing the actions themselves. We make this contribution open-source, so it can be adapted and used for other related research on unleashing the co-creative power of RPGs.
- Score: 1.53119329713143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Every time an Interactive Storytelling (IS) system gets a player input, it faces the world-update problem. Classical approaches to this problem consist of mapping that input to known, preprogrammed actions, which can severely constrain the free will of the player. When the expected experience has a strong focus on improvisation, as in Role-playing Games (RPGs), this problem is critical. In this paper, we present PAYADOR, a different approach that focuses on predicting the outcomes of the actions instead of representing the actions themselves. To implement this approach, we ground a Large Language Model on a minimal representation of the fictional world, obtaining promising results. We make this contribution open-source, so it can be adapted and used for other related research on unleashing the co-creative power of RPGs.
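As an illustration of the world-update step the abstract describes, the sketch below keeps a small structured snapshot of the fictional world, shows it to an LLM together with the player's free-form input, and asks the model to predict the outcome as a narration plus an updated state, rather than mapping the input to a preprogrammed action. This is a minimal sketch in Python under assumed names: the `WorldState` dataclass, the prompt wording, and the `complete` callable are illustrative placeholders, not PAYADOR's actual code or API.

```python
# Minimal sketch of one world-update step (illustrative names, not PAYADOR's code).
import json
from dataclasses import dataclass, field, asdict


@dataclass
class WorldState:
    characters: dict = field(default_factory=dict)  # character name -> attributes (location, mood, ...)
    items: dict = field(default_factory=dict)       # item name -> current location or holder
    locations: list = field(default_factory=list)   # known places in the fictional world


PROMPT = """You are the game master of a role-playing game.
Current world state (JSON):
{state}

Player input: "{player_input}"

Predict the outcome of this action. Answer with JSON only, in the form:
{{"narration": "...", "updated_state": {{"characters": {{}}, "items": {{}}, "locations": []}}}}"""


def update_world(state, player_input, complete):
    """Predict the outcome of a free-form player input.

    `complete` is any callable that sends a prompt to an LLM and returns its
    text reply, e.g. a thin wrapper around a chat-completion API.
    """
    prompt = PROMPT.format(state=json.dumps(asdict(state)), player_input=player_input)
    reply = json.loads(complete(prompt))
    return reply["narration"], WorldState(**reply["updated_state"])
```

Because the reply contains a structured state and not only free text, the same snapshot can be passed back to the model on the next turn, which is what keeps the language model grounded on the structured world data.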
Related papers
- Playpen: An Environment for Exploring Learning Through Conversational Interaction [81.67330926729015]
We examine to what extent synthetic interaction in what we call Dialogue Games can provide a learning signal.
We investigate the effects of supervised fine-tuning on this data.
We release the framework and the baseline training setups in the hope that this can foster research in this promising new direction.
arXiv Detail & Related papers (2025-04-11T14:49:33Z)
- Collaborative Storytelling and LLM: A Linguistic Analysis of Automatically-Generated Role-Playing Game Sessions [55.2480439325792]
Role-playing games (RPGs) are games in which players interact with one another to create narratives.
This emerging form of shared narrative, primarily oral, is receiving increasing attention.
In this paper, we aim to discover to what extent the language of Large Language Models (LLMs) exhibits oral or written features when asked to generate an RPG session.
arXiv Detail & Related papers (2025-03-26T15:10:47Z)
- Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study [3.4333699338998693]
This pilot study explores the application of language models (LMs) to model game event sequences.
We transform raw event data into textual sequences and pretrain a Longformer model on this data.
The results demonstrate the potential of self-supervised LMs in enhancing game design and personalization without relying on ground-truth labels.
arXiv Detail & Related papers (2024-10-24T09:59:10Z)
- Can VLMs Play Action Role-Playing Games? Take Black Myth Wukong as a Study Case [20.14197375326218]
This research aims to provide new insights and directions for applying multimodal agents in complex action game environments.
We select an ARPG, "Black Myth: Wukong", as a research platform to explore the capability boundaries of existing vision language models.
We will release a human operation dataset containing recorded gameplay videos and operation logs, including mouse and keyboard actions.
arXiv Detail & Related papers (2024-09-19T16:30:25Z)
- What if Red Can Talk? Dynamic Dialogue Generation Using Large Language Models [0.0]
We introduce a dialogue filler framework that utilizes large language models (LLMs) to generate dynamic and contextually appropriate character interactions.
We test this framework within the environments of Final Fantasy VII Remake and Pokemon.
This study aims to assist developers in crafting more nuanced filler dialogues, thereby enriching player immersion and enhancing the overall RPG experience.
arXiv Detail & Related papers (2024-07-29T19:12:18Z)
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner [51.77263363285369]
We present an approach called Dialogue Action Tokens that adapts language model agents to plan goal-directed dialogues.
The core idea is to treat each utterance as an action, thereby converting dialogues into games where existing approaches such as reinforcement learning can be applied.
arXiv Detail & Related papers (2024-06-17T18:01:32Z)
- Tachikuma: Understanding Complex Interactions with Multi-Character and Novel Objects by Large Language Models [67.20964015591262]
We introduce a benchmark named Tachikuma, comprising a Multiple character and novel Object based interaction Estimation task and a supporting dataset.
The dataset captures log data from real-time communications during gameplay, providing diverse, grounded, and complex interactions for further explorations.
We present a simple prompting baseline and evaluate its performance, demonstrating its effectiveness in enhancing interaction understanding.
arXiv Detail & Related papers (2023-07-24T07:40:59Z)
- About latent roles in forecasting players in team sports [47.066729480128856]
Team sports contain a significant social component that influences interactions between teammates and opponents.
We create RolFor, a novel end-to-end model for Role-based Forecasting.
arXiv Detail & Related papers (2023-04-17T13:33:23Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- Keep CALM and Explore: Language Models for Action Generation in Text-based Games [27.00685301984832]
We propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards.
arXiv Detail & Related papers (2020-10-06T17:36:29Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)