Living the Novel: A System for Generating Self-Training Timeline-Aware Conversational Agents from Novels
- URL: http://arxiv.org/abs/2512.07474v1
- Date: Mon, 08 Dec 2025 11:57:46 GMT
- Title: Living the Novel: A System for Generating Self-Training Timeline-Aware Conversational Agents from Novels
- Authors: Yifei Huang, Tianyu Yan, Sitong Gong, Xiwei Gao, Caixin Kang, Ruicong Liu, Huchuan Lu, Bo Zheng
- Abstract summary: We present an end-to-end system that transforms any literary work into an immersive, multi-character conversational experience. This system is designed to solve two fundamental challenges for LLM-driven characters.
- Score: 50.43968216132018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the Living Novel, an end-to-end system that transforms any literary work into an immersive, multi-character conversational experience. This system is designed to solve two fundamental challenges for LLM-driven characters. Firstly, generic LLMs suffer from persona drift, often failing to stay in character. Secondly, agents often exhibit abilities that extend beyond the constraints of the story's world and logic, leading to both narrative incoherence (spoiler leakage) and robustness failures (frame-breaking). To address these challenges, we introduce a novel two-stage training pipeline. Our Deep Persona Alignment (DPA) stage uses data-free reinforcement finetuning to instill deep character fidelity. Our Coherence and Robustness Enhancing (CRE) stage then employs a story-time-aware knowledge graph and a second retrieval-grounded training pass to architecturally enforce these narrative constraints. We validate our system through a multi-phase evaluation using Jules Verne's Twenty Thousand Leagues Under the Sea. A lab study with a detailed ablation of system components is followed by a 5-day in-the-wild diary study. Our DPA pipeline helps our specialized model outperform GPT-4o on persona-specific metrics, and our CRE stage achieves near-perfect performance in coherence and robustness measures. Our study surfaces practical design guidelines for AI-driven narrative systems: we find that character-first self-training is foundational for believability, while explicit story-time constraints are crucial for sustaining coherent, interruption-resilient mobile-web experiences.
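The paper does not publish its implementation, but the core idea of the CRE stage, a story-time-aware knowledge graph that architecturally prevents spoiler leakage, can be sketched minimally. In the sketch below, each fact is tagged with the chapter in which the narrative reveals it, and retrieval is clamped to the conversation's current story time; all class names, fact contents, and chapter numbers are illustrative assumptions, not the authors' actual design.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str
    revealed_in_chapter: int  # story time at which this fact becomes known


class StoryTimeKG:
    """A minimal story-time-aware knowledge graph: every fact carries the
    chapter in which the narrative reveals it, and retrieval is clamped to
    the current story time so later facts cannot leak as spoilers."""

    def __init__(self) -> None:
        self._facts: list[Fact] = []

    def add(self, subject: str, relation: str, obj: str, chapter: int) -> None:
        self._facts.append(Fact(subject, relation, obj, chapter))

    def retrieve(self, subject: str, current_chapter: int) -> list[Fact]:
        # Only facts already revealed by the current chapter are retrievable;
        # grounding generation on this filtered set enforces coherence.
        return [
            f for f in self._facts
            if f.subject == subject and f.revealed_in_chapter <= current_chapter
        ]


# Hypothetical facts about Captain Nemo, tagged by an assumed reveal chapter.
kg = StoryTimeKG()
kg.add("Nemo", "commands", "the Nautilus", chapter=8)
kg.add("Nemo", "harbors", "a vendetta against an oppressor nation", chapter=21)

early = kg.retrieve("Nemo", current_chapter=10)  # vendetta still hidden
late = kg.retrieve("Nemo", current_chapter=23)   # both facts now revealed
```

Retrieval-grounded training would then condition the character model only on the filtered fact set, so the spoiler constraint is enforced by the data path rather than by prompting alone.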
Related papers
- Retell, Reward, Repeat: Reinforcement Learning for Narrative Theory-Informed Story Generation [5.151910664667141]
We use Todorov's Theory of Narrative Equilibrium to establish principles that define desirable ASG qualities. We prompt 7B and 14B LLM-as-judge models with our principles to test alignment with human annotators. We show that d-RLAIF offers a viable alternative to supervised fine-tuning (SFT).
arXiv Detail & Related papers (2026-01-23T23:23:42Z) - Beyond Direct Generation: A Decomposed Approach to Well-Crafted Screenwriting with LLMs [6.802263659531867]
Large Language Models (LLMs) show great potential in creative writing. Direct end-to-end generation approaches often fail to produce well-crafted screenplays. We introduce Dual-Stage Refinement (DSR), a framework that decouples creative narrative generation from format conversion.
arXiv Detail & Related papers (2025-10-27T09:41:29Z) - EvolvTrip: Enhancing Literary Character Understanding with Temporal Theory-of-Mind Graphs [23.86303464364475]
We introduce EvolvTrip, a perspective-aware temporal knowledge graph that tracks psychological development throughout narratives. Our findings highlight the importance of explicit representation of temporal character mental states in narrative comprehension.
arXiv Detail & Related papers (2025-06-16T16:05:17Z) - If an LLM Were a Character, Would It Know Its Own Story? Evaluating Lifelong Learning in LLMs [55.8331366739144]
We introduce LIFESTATE-BENCH, a benchmark designed to assess lifelong learning in large language models (LLMs). Our fact-checking evaluation probes models' self-awareness, episodic memory retrieval, and relationship tracking, across both parametric and non-parametric approaches.
arXiv Detail & Related papers (2025-03-30T16:50:57Z) - Learning to Reason for Long-Form Story Generation [84.09733333295338]
We propose a general story-generation task (Next-Chapter Prediction) and a reward formulation (Verified Rewards via Completion Likelihood Improvement). We learn to reason over a story's condensed information and generate a detailed plan for the next chapter. Our reasoning is evaluated via the chapters it helps a story-generator create, and compared against non-trained and supervised finetuning (SFT) baselines.
arXiv Detail & Related papers (2025-03-28T18:48:26Z) - Towards Enhanced Immersion and Agency for LLM-based Interactive Drama [55.770617779283064]
This paper begins with understanding interactive drama from two aspects: Immersion, the player's feeling of being present in the story, and Agency. To enhance these two aspects, we first propose Playwriting-guided Generation, a novel method that helps LLMs craft dramatic stories with substantially improved structures and narrative quality.
arXiv Detail & Related papers (2025-02-25T06:06:16Z) - Agents' Room: Narrative Generation through Multi-step Collaboration [54.98886593802834]
We propose a generation framework inspired by narrative theory that decomposes narrative writing into subtasks tackled by specialized agents. We show that Agents' Room generates stories preferred by expert evaluators over those produced by baseline systems.
arXiv Detail & Related papers (2024-10-03T15:44:42Z) - Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as is demonstrated by over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z) - Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
arXiv Detail & Related papers (2021-05-04T06:40:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.