CALYPSO: LLMs as Dungeon Masters' Assistants
- URL: http://arxiv.org/abs/2308.07540v1
- Date: Tue, 15 Aug 2023 02:57:00 GMT
- Title: CALYPSO: LLMs as Dungeon Masters' Assistants
- Authors: Andrew Zhu and Lara J. Martin and Andrew Head and Chris Callison-Burch
- Abstract summary: Large language models (LLMs) have shown remarkable abilities to generate coherent natural language text.
We introduce CALYPSO, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario.
When given access to CALYPSO, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency.
- Score: 46.61924662589895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to
perform multiple tasks simultaneously. The DM must digest information about the
game setting and monsters, synthesize scenes to present to other players, and
respond to the players' interactions with the scene. Doing all of these tasks
while maintaining consistency within the narrative and story world is no small
feat of human cognition, making the task tiring and unapproachable to new
players. Large language models (LLMs) like GPT-3 and ChatGPT have shown
remarkable abilities to generate coherent natural language text. In this paper,
we conduct a formative evaluation with DMs to establish the use cases of LLMs
in D&D and tabletop gaming generally. We introduce CALYPSO, a system of
LLM-powered interfaces that support DMs with information and inspiration
specific to their own scenario. CALYPSO distills game context into bite-sized
prose and helps brainstorm ideas without distracting the DM from the game. When
given access to CALYPSO, DMs reported that it generated high-fidelity text
suitable for direct presentation to players, and low-fidelity ideas that the DM
could develop further while maintaining their creative agency. We see CALYPSO
as exemplifying a paradigm of AI-augmented tools that provide synchronous
creative assistance within established game worlds, and tabletop gaming more
broadly.
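
To make the idea of "distilling game context into bite-sized prose" concrete, the sketch below shows one way an interface like CALYPSO might compress a monster stat block into short, player-facing narration with an LLM. This is an illustrative assumption, not the paper's implementation: the `llm_complete` helper, the prompt wording, and the example usage are hypothetical stand-ins for whatever GPT-3-class completion API the DM's tooling uses.

```python
# Illustrative sketch only: CALYPSO's actual prompts and model calls are not
# reproduced here. `llm_complete` is a placeholder for any LLM completion API.

def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM completion call (e.g. a hosted or local model)."""
    raise NotImplementedError("Wire this up to the LLM backend of your choice.")


def distill_monster(stat_block: str, scene: str) -> str:
    """Compress a raw stat block into bite-sized prose a DM can read to players."""
    prompt = (
        "You are assisting a Dungeons & Dragons Dungeon Master.\n"
        "Summarize the following monster stat block as two or three sentences of\n"
        "evocative, player-facing narration. Do not reveal exact numbers.\n\n"
        f"Current scene: {scene}\n\n"
        f"Stat block:\n{stat_block}\n\n"
        "Narration:"
    )
    return llm_complete(prompt)


# Hypothetical usage:
# narration = distill_monster(owlbear_stat_block, "The party enters a mossy cave.")
# print(narration)
```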
Related papers
- What if Red Can Talk? Dynamic Dialogue Generation Using Large Language Models [0.0]
We introduce a dialogue filler framework that utilizes large language models (LLMs) to generate dynamic and contextually appropriate character interactions.
We test this framework within the environments of Final Fantasy VII Remake and Pokemon.
This study aims to assist developers in crafting more nuanced filler dialogues, thereby enriching player immersion and enhancing the overall RPG experience.
arXiv Detail & Related papers (2024-07-29T19:12:18Z)
- Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment [62.898963074989766]
We introduce Ditto, a self-alignment method for role-play.
This method creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets by tenfold.
We present the first comprehensive cross-supervision alignment experiment in the role-play domain.
arXiv Detail & Related papers (2024-01-23T03:56:22Z)
- FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information [75.201485544517]
We present FIREBALL, a large dataset containing nearly 25,000 unique sessions from real D&D gameplay on Discord with true game state info.
We demonstrate that FIREBALL can improve natural language generation (NLG) by using Avrae state information.
arXiv Detail & Related papers (2023-05-02T15:36:10Z)
- I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons [82.28503603235364]
We study teacher-student natural language interactions in a goal-driven environment in Dungeons and Dragons.
Our approach is to decompose and model these interactions into (1) the Dungeon Master's intent to guide players toward a given goal; (2) the DM's guidance utterance to the players expressing this intent; and (3) a theory-of-mind (ToM) model that anticipates the players' reaction to the guidance one turn into the future.
arXiv Detail & Related papers (2022-12-20T08:06:55Z)
- Dungeons and Dragons as a Dialog Challenge for Artificial Intelligence [28.558934742150022]
We frame D&D as a dialogue system challenge, where the tasks are to both generate the next conversational turn in the game and predict the state of the game given the dialogue history.
We create a gameplay dataset consisting of nearly 900 games, with a total of 7,000 players, 800,000 dialogue turns, 500,000 dice rolls, and 58 million words.
We train a large language model (LM) to generate the next game turn, conditioning it on different information.
arXiv Detail & Related papers (2022-10-13T15:43:39Z)
- A Mixture-of-Expert Approach to RL-based Dialogue Management [56.08449336469477]
We use reinforcement learning to develop a dialogue agent that avoids being short-sighted (outputting generic utterances) and maximizes overall user satisfaction.
Most existing RL approaches to DM train the agent at the word level and thus must deal with a combinatorially complex action space, even for a medium-size vocabulary.
We develop an RL-based DM using a novel mixture-of-expert language model (MoE-LM) that consists of (i) a LM capable of learning diverse semantics for conversation histories, (ii) a number of specialized LMs (or experts) capable of generating utterances corresponding to a particular attribute or personality, and (iii) an RL-based DM that performs dialogue planning with the utterances generated by the experts.
arXiv Detail & Related papers (2022-05-31T19:00:41Z)
- How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds [47.7511759322784]
We seek to create agents that both act and communicate with other agents in pursuit of a goal.
We introduce a reinforcement learning system that incorporates large-scale language modeling-based and commonsense reasoning-based pre-training.
We conduct zero-shot evaluations using held-out human expert demonstrations, showing that our agents are able to act consistently and talk naturally with respect to their motivations.
arXiv Detail & Related papers (2020-10-01T21:06:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.