Design Techniques for LLM-Powered Interactive Storytelling: A Case Study of the Dramamancer System
- URL: http://arxiv.org/abs/2601.18785v1
- Date: Mon, 26 Jan 2026 18:51:20 GMT
- Title: Design Techniques for LLM-Powered Interactive Storytelling: A Case Study of the Dramamancer System
- Authors: Tiffany Wang, Yuqian Sun, Yi Wang, Melissa Roemmele, John Joon Young Chung, Max Kreminski
- Abstract summary: Dramamancer is a system that transforms author-created story schemas into player-driven playthroughs. This extended abstract outlines some design techniques and evaluation considerations associated with this system.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of Large Language Models (LLMs) has enabled a new paradigm for bridging authorial intent and player agency in interactive narrative. We consider this paradigm through the example of Dramamancer, a system that uses an LLM to transform author-created story schemas into player-driven playthroughs. This extended abstract outlines some design techniques and evaluation considerations associated with this system.
Related papers
- OPEN-THEATRE: An Open-Source Toolkit for LLM-based Interactive Drama [62.00761178362677]
Open-Theatre is the first open-source toolkit for experiencing and customizing LLM-based interactive drama. It refines prior work with an efficient multi-agent architecture and a hierarchical retrieval-based memory system.
arXiv Detail & Related papers (2025-09-20T14:53:14Z)
- Integrating Visual Interpretation and Linguistic Reasoning for Math Problem Solving [61.992824291296444]
Current large vision-language models (LVLMs) typically employ a connector module to link visual features with text embeddings of large language models (LLMs). This paper proposes a paradigm shift: instead of training end-to-end vision-language reasoning models, we advocate for developing a decoupled reasoning framework.
arXiv Detail & Related papers (2025-05-23T08:18:00Z)
- Agent-Centric Projection of Prompting Techniques and Implications for Synthetic Training Data for Large Language Models [0.8879149917735942]
This paper introduces and explains the concepts of linear contexts (a single, continuous sequence of interactions) and non-linear contexts (branching or multi-path) in Large Language Models (LLMs). These concepts enable the development of an agent-centric projection of prompting techniques, a framework that can reveal deep connections between prompting strategies and multi-agent systems.
arXiv Detail & Related papers (2025-01-14T03:26:43Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- Visual Prompting in Multimodal Large Language Models: A Survey [95.75225825537528]
Multimodal large language models (MLLMs) equip pre-trained large-language models (LLMs) with visual capabilities.
Visual prompting has emerged for more fine-grained and free-form visual instructions.
This paper focuses on visual prompting, prompt generation, compositional reasoning, and prompt learning.
arXiv Detail & Related papers (2024-09-05T08:47:34Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Guiding and Diversifying LLM-Based Story Generation via Answer Set Programming [1.7889842797216124]
Large language models (LLMs) are capable of generating stories in response to open-ended user requests.
We propose using a higher-level and more abstract symbolic specification of high-level story structure to guide and diversify story generation.
arXiv Detail & Related papers (2024-06-01T21:14:25Z)
- Online Advertisements with LLMs: Opportunities and Challenges [51.96140910798771]
This paper explores the potential for leveraging Large Language Models (LLM) in the realm of online advertising systems.
We introduce a general framework for LLM advertisement, consisting of modification, bidding, prediction, and auction modules.
arXiv Detail & Related papers (2023-11-11T02:13:32Z)
- THEaiTRE: Artificial Intelligence to Write a Theatre Play [4.450488404542801]
THEaiTRE is a project aimed at automatic generation of theatre play scripts.
We plan to adopt generative neural language models and hierarchical generation approaches, supported by summarization and machine translation methods.
arXiv Detail & Related papers (2020-06-25T19:24:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.