Narrative Theory-Driven LLM Methods for Automatic Story Generation and Understanding: A Survey
- URL: http://arxiv.org/abs/2602.15851v1
- Date: Fri, 23 Jan 2026 23:30:42 GMT
- Title: Narrative Theory-Driven LLM Methods for Automatic Story Generation and Understanding: A Survey
- Authors: David Y. Liu, Aditya Joshi, Paul Dawson
- Abstract summary: This survey examines how natural language processing (NLP) research engages with fields of narrative studies. We propose a taxonomy for ongoing efforts that reflects established distinctions in narratology.
- Score: 5.628092290410201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applications of narrative theories using large language models (LLMs) deliver promising use-cases in automatic story generation and understanding tasks. Our survey examines how natural language processing (NLP) research engages with the fields of narrative studies, and proposes a taxonomy for ongoing efforts that reflects established distinctions in narratology. We identify patterns in narrative datasets and tasks, in the narrative theories adopted within NLP pipelines, and in methodological trends in prompting and fine-tuning. We highlight how LLMs make it easier to connect NLP pipelines with abstract narrative concepts, and the opportunities this creates for interdisciplinary collaboration. Challenges remain in working towards any unified definition or benchmark of narrative-related tasks, which makes model comparison difficult. For future directions, rather than pursuing a single, generalised benchmark for 'narrative quality', we believe progress benefits more from efforts that focus on the following: defining and improving theory-based metrics for individual narrative attributes to incrementally improve model performance; conducting large-scale, theory-driven literary, social, and cultural analysis; and designing experiments whose outputs can be used to validate or refine narrative theories. This work provides a contextual foundation for more systematic and theoretically informed narrative research in NLP by giving an overview of ongoing research efforts and the broader narrative studies landscape.
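As a minimal illustration of what a theory-based metric for a single narrative attribute could look like (a sketch of ours, not a method from the survey; the focalization rubric, the `score_attribute` function, and the injected `llm` callable are hypothetical placeholders), one might prompt an LLM with an attribute-specific rubric and keep scores separate per attribute rather than collapsing them into one "narrative quality" number:

```python
from typing import Callable

# Hypothetical sketch: score one theory-grounded narrative attribute
# (here, focalization consistency in Genette's sense) with an LLM rubric,
# instead of asking for a single generic "narrative quality" score.
FOCALIZATION_RUBRIC = (
    "Rate the story's focalization consistency from 1 to 5. "
    "1 = the narrative perspective shifts arbitrarily between characters; "
    "5 = a single point of view is maintained throughout. "
    "Answer with one integer only."
)

def score_attribute(story: str, llm: Callable[[str], str],
                    rubric: str = FOCALIZATION_RUBRIC) -> int:
    """Ask an LLM (passed in as a plain text-in/text-out callable) to score
    one narrative attribute against an attribute-specific rubric."""
    prompt = f"{rubric}\n\nStory:\n{story}"
    return int(llm(prompt).strip())
```

Keeping per-attribute scores separate in this way is one concrete reading of the survey's recommendation to improve theory-based metrics for individual narrative attributes incrementally, rather than chasing a single aggregate benchmark.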
Related papers
- Context-Aware Hierarchical Taxonomy Generation for Scientific Papers via LLM-Guided Multi-Aspect Clustering [59.54662810933882]
Existing taxonomy construction methods, leveraging unsupervised clustering or direct prompting of large language models, often lack coherence and granularity. We propose a novel context-aware hierarchical taxonomy generation framework that integrates LLM-guided multi-aspect encoding with dynamic clustering.
arXiv Detail & Related papers (2025-09-23T15:12:58Z) - Re:Verse -- Can Your VLM Read a Manga? [14.057881684215047]
Current Vision Language Models (VLMs) demonstrate a critical gap between surface-level recognition and deep narrative reasoning. We introduce a novel evaluation framework that combines fine-grained multimodal annotation, cross-modal embedding analysis, and retrieval-augmented assessment. We conduct the first systematic study of long-form narrative understanding in VLMs through three core evaluation axes: generative storytelling, contextual dialogue grounding, and temporal reasoning.
arXiv Detail & Related papers (2025-08-11T22:40:05Z) - LogiDynamics: Unraveling the Dynamics of Inductive, Abductive and Deductive Logical Inferences in LLM Reasoning [74.0242521818214]
This paper systematically investigates the comparative dynamics of inductive (System 1) versus abductive/deductive (System 2) inference in large language models (LLMs). We utilize a controlled analogical reasoning environment, varying modality (textual, visual, symbolic), difficulty, and task format (MCQ / free-text). Our analysis reveals that System 2 pipelines generally excel, particularly in visual/symbolic modalities and harder tasks, while System 1 is competitive for textual and easier problems.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - MLD-EA: Check and Complete Narrative Coherence by Introducing Emotions and Actions [8.06073345741722]
We introduce the Missing Logic Detector by Emotion and Action (MLD-EA) model. It identifies narrative gaps and generates coherent sentences that integrate seamlessly with the story's emotional and logical flow. This work fills a gap in NLP research and advances the broader goal of creating more sophisticated and reliable story-generation systems.
arXiv Detail & Related papers (2024-12-03T23:01:21Z) - Narrative Analysis of True Crime Podcasts With Knowledge Graph-Augmented Large Language Models [8.78598447041169]
Large language models (LLMs) still struggle with complex narrative arcs as well as narratives containing conflicting information.
Recent work indicates LLMs augmented with external knowledge bases can improve the accuracy and interpretability of the resulting models.
In this work, we analyze the effectiveness of applying knowledge graphs (KGs) in understanding true-crime podcast data.
arXiv Detail & Related papers (2024-11-01T21:49:00Z) - Systematic Task Exploration with LLMs: A Study in Citation Text Generation [63.50597360948099]
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks.
We propose a three-component research framework that consists of systematic input manipulation, reference data, and output measurement.
We use this framework to explore citation text generation -- a popular scholarly NLP task that lacks consensus on the task definition and evaluation metric.
arXiv Detail & Related papers (2024-07-04T16:41:08Z) - Narrative Action Evaluation with Prompt-Guided Multimodal Interaction [60.281405999483]
Narrative action evaluation (NAE) aims to generate professional commentary that evaluates the execution of an action.
NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor.
We propose a prompt-guided multimodal interaction framework to facilitate the interaction between different modalities of information.
arXiv Detail & Related papers (2024-04-22T17:55:07Z) - Fine-Grained Modeling of Narrative Context: A Coherence Perspective via Retrospective Questions [48.18584733906447]
This work introduces an original and practical paradigm for narrative comprehension, stemming from the observation that individual passages within a narrative tend to be more cohesively related to one another than isolated passages are.
We propose a fine-grained modeling of narrative context, by formulating a graph dubbed NarCo, which explicitly depicts task-agnostic coherence dependencies.
arXiv Detail & Related papers (2024-02-21T06:14:04Z) - Are NLP Models Good at Tracing Thoughts: An Overview of Narrative Understanding [21.900015612952146]
Narrative understanding involves capturing the author's cognitive processes, providing insights into their knowledge, intentions, beliefs, and desires.
Although large language models (LLMs) excel in generating grammatically coherent text, their ability to comprehend the author's thoughts remains uncertain.
This hinders the practical applications of narrative understanding.
arXiv Detail & Related papers (2023-10-28T18:47:57Z) - Self-Consistent Narrative Prompts on Abductive Natural Language Inference [42.201304482932706]
Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations.
We propose a prompt-tuning model, $\alpha$-PACE, which takes self-consistency and inter-sentential coherence into consideration.
arXiv Detail & Related papers (2023-09-15T10:48:10Z) - An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z) - Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, relaying task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, and decoding, along with the key challenges outstanding in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.