Codified Foreshadowing-Payoff Text Generation
- URL: http://arxiv.org/abs/2601.07033v1
- Date: Sun, 11 Jan 2026 19:05:37 GMT
- Title: Codified Foreshadowing-Payoff Text Generation
- Authors: Longfei Yun, Kun Zhou, Yupeng Hou, Letian Peng, Jingbo Shang,
- Abstract summary: Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. We introduce Codified Foreshadowing-Payoff Generation, a novel framework that reframes narrative quality through the lens of payoff realization.
- Score: 67.01182739162142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. However, despite advances in story generation, large language models (LLMs) frequently fail to bridge these long-range narrative dependencies, often leaving "Chekhov's guns" unfired even when the necessary context is present. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. In this paper, we introduce Codified Foreshadowing-Payoff Generation (CFPG), a novel framework that reframes narrative quality through the lens of payoff realization. Recognizing that LLMs struggle to intuitively grasp the "triggering mechanism" of a foreshadowed event, CFPG transforms narrative continuity into a set of executable causal predicates. By mining and encoding Foreshadow-Trigger-Payoff triples from the BookSum corpus, we provide structured supervision that ensures foreshadowed commitments are not only mentioned but also temporally and logically fulfilled. Experiments demonstrate that CFPG significantly outperforms standard prompting baselines in payoff accuracy and narrative alignment. Our findings suggest that explicitly codifying narrative mechanics is essential for moving LLMs from surface-level fluency to genuine narrative competence.
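The abstract's central idea, that narrative continuity can be checked via executable causal predicates over mined Foreshadow-Trigger-Payoff triples, can be illustrated with a minimal sketch. All names below (`FTPTriple`, `payoff_fulfilled`, the matching-by-substring strategy) are hypothetical assumptions for illustration; the paper's actual predicate encoding is not specified in this summary and may differ:

```python
from dataclasses import dataclass

# Hypothetical encoding of one Foreshadow-Trigger-Payoff triple.
# The predicate checks, in order, that the foreshadow appears, the
# trigger occurs after it, and the payoff follows the trigger -- i.e.
# the foreshadowed commitment is temporally and logically fulfilled.

@dataclass
class FTPTriple:
    foreshadow: str  # e.g. "a revolver hangs above the mantel"
    trigger: str     # e.g. "the duel is declared"
    payoff: str      # e.g. "the revolver is fired"

def payoff_fulfilled(story: str, triple: FTPTriple) -> bool:
    """Executable causal predicate: foreshadow < trigger < payoff in time."""
    i = story.find(triple.foreshadow)
    if i == -1:
        return False
    j = story.find(triple.trigger, i + len(triple.foreshadow))
    if j == -1:
        return False
    return story.find(triple.payoff, j + len(triple.trigger)) != -1

story = ("A revolver hangs above the mantel. Years pass. "
         "The duel is declared. At dawn, the revolver is fired.")
triple = FTPTriple("revolver hangs above the mantel",
                   "duel is declared",
                   "revolver is fired")
print(payoff_fulfilled(story, triple))  # True: the Chekhov's gun is fired
```

A real system would match events semantically rather than by exact substring, but the ordered-predicate structure is what distinguishes this kind of structured supervision from surface-level coherence scoring: a story that mentions the revolver yet never fires it after the trigger fails the check.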
Related papers
- LaSER: Internalizing Explicit Reasoning into Latent Space for Dense Retrieval [74.72139580745511]
LaSER is a novel self-distillation framework that internalizes explicit reasoning into the latent space of retrievers. Our method successfully combines the reasoning depth of explicit CoT pipelines with the inference efficiency of standard dense retrievers.
arXiv Detail & Related papers (2026-03-02T04:11:18Z) - NarraScore: Bridging Visual Narrative and Musical Dynamics via Hierarchical Affective Control [59.6128550986024]
NarraScore is a hierarchical framework predicated on the core insight that emotion serves as a high-density compression of narrative logic. NarraScore employs a Dual-Branch Injection strategy to reconcile global structure with local dynamism. NarraScore achieves state-of-the-art consistency and narrative alignment with negligible computational overhead.
arXiv Detail & Related papers (2026-02-09T09:39:42Z) - MUSE: A Multi-agent Framework for Unconstrained Story Envisioning via Closed-Loop Cognitive Orchestration [16.61208703961799]
We develop a framework to generate long-form audio-visual stories from a short user prompt. MUSE translates narrative intent into explicit, machine-executable controls over identity, spatial composition, and temporal continuity. MUSE substantially improves long-horizon narrative coherence, cross-modal identity consistency, and cinematic quality compared with representative baselines.
arXiv Detail & Related papers (2026-02-03T02:55:00Z) - NarrativeTrack: Evaluating Video Language Models Beyond the Frame [10.244330591706744]
We introduce NarrativeTrack, the first benchmark to evaluate narrative understanding in MLLMs. We decompose videos into constituent entities and examine their continuity via a Compositional Reasoning (CRP) framework. CRP challenges models to advance from temporal persistence to contextual evolution and fine-grained perceptual reasoning.
arXiv Detail & Related papers (2026-01-03T07:12:55Z) - Living the Novel: A System for Generating Self-Training Timeline-Aware Conversational Agents from Novels [50.43968216132018]
We present an end-to-end system that transforms any literary work into an immersive, multi-character conversational experience. This system is designed to solve two fundamental challenges for LLM-driven characters.
arXiv Detail & Related papers (2025-12-08T11:57:46Z) - NOAH: Benchmarking Narrative Prior driven Hallucination and Omission in Video Large Language Models [8.6767620170781]
Video large language models (Video LLMs) have recently achieved strong performance on tasks such as captioning, summarization, and question answering. Many models and training methods explicitly encourage continuity across events to enhance narrative coherence. We identify this bias, which we call narrative prior, as a key driver of two errors: hallucinations, where non-existent events are introduced or existing ones are misinterpreted, and omissions, where factual events are suppressed because they are misaligned with surrounding context.
arXiv Detail & Related papers (2025-11-09T17:41:11Z) - Cut2Next: Generating Next Shot via In-Context Tuning [93.14744132897428]
Multi-shot generation demands purposeful, film-like transitions and strict cinematic continuity. Current methods often prioritize basic visual consistency, neglecting crucial editing patterns. We introduce Next Shot Generation (NSG): generating a subsequent, high-quality shot that critically synthesizes professional editing patterns.
arXiv Detail & Related papers (2025-08-11T17:56:59Z) - Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection [35.550137361809405]
Plot hole detection in stories is a proxy to evaluate language understanding and reasoning in Large Language Models. We introduce FlawedFictionsMaker, a novel algorithm to controllably and carefully synthesize plot holes in human-written stories. We find that state-of-the-art LLMs struggle in accurately solving FlawedFictions regardless of the reasoning effort allowed.
arXiv Detail & Related papers (2025-04-16T09:25:54Z) - Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z) - Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.