Creating Suspenseful Stories: Iterative Planning with Large Language Models
- URL: http://arxiv.org/abs/2402.17119v1
- Date: Tue, 27 Feb 2024 01:25:52 GMT
- Title: Creating Suspenseful Stories: Iterative Planning with Large Language Models
- Authors: Kaige Xie, Mark Riedl
- Abstract summary: We propose a novel iterative-prompting-based planning method that is grounded in two theoretical foundations of story suspense.
To the best of our knowledge, this paper is the first attempt at suspenseful story generation with large language models.
- Score: 2.6923151107804055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated story generation has been one of the long-standing challenges in
NLP. Among all dimensions of stories, suspense is very common in human-written
stories but relatively under-explored in AI-generated stories. While recent
advances in large language models (LLMs) have greatly promoted language
generation in general, state-of-the-art LLMs are still unreliable when it comes
to suspenseful story generation. We propose a novel iterative-prompting-based
planning method that is grounded in two theoretical foundations of story
suspense from cognitive psychology and narratology. This theory-grounded method
works in a fully zero-shot manner and does not rely on any supervised story
corpora. To the best of our knowledge, this paper is the first attempt at
suspenseful story generation with LLMs. Extensive human evaluations of the
generated suspenseful stories demonstrate the effectiveness of our method.
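The core idea lends itself to a short sketch. Below is a minimal, hypothetical illustration of an iterative-prompting planner built on one common theoretical account of suspense (the reader perceives the protagonist's escape options dwindling); the `llm` callable, prompt wording, and loop structure are illustrative assumptions, not the authors' actual prompts.

```python
# Minimal sketch of an iterative-prompting suspense planner.
# ASSUMPTIONS: `llm` is a placeholder for any text-in, text-out LLM
# call; the prompts and loop shape are illustrative, not the paper's.

from typing import Callable, List

def plan_suspenseful_story(
    llm: Callable[[str], str],   # hypothetical LLM call
    protagonist: str,
    threat: str,
    n_iterations: int = 3,
) -> List[str]:
    """Iteratively prompt an LLM to (1) propose an escape option for
    the protagonist and (2) plot an event that eliminates it, so the
    reader perceives fewer and fewer ways out -- one theoretical
    account of rising suspense."""
    outline: List[str] = [f"{protagonist} faces a threat: {threat}."]
    for _ in range(n_iterations):
        context = " ".join(outline)
        # Step 1: ask for a plausible way the protagonist could escape.
        option = llm(
            f"Story so far: {context}\n"
            f"Name one plausible way {protagonist} could avoid: {threat}."
        )
        # Step 2: ask for an event that forecloses that escape route.
        setback = llm(
            f"Story so far: {context}\n"
            f"Write one event that makes this escape impossible: {option}"
        )
        outline.append(f"Hope: {option}")
        outline.append(f"Setback: {setback}")
    return outline

if __name__ == "__main__":
    stub = lambda prompt: "<generated text>"  # replace with a real LLM call
    print(plan_suspenseful_story(stub, "Ada", "the reactor melting down"))
```

With a stub in place of `llm`, the loop alternates between raising a hope and foreclosing it, which is the shape an option-eliminating suspense plan takes; it requires no supervised story corpus, matching the zero-shot setting the abstract describes.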
Related papers
- Agents' Room: Narrative Generation through Multi-step Collaboration [54.98886593802834]
We propose a generation framework inspired by narrative theory that decomposes narrative writing into subtasks tackled by specialized agents.
We show that Agents' Room generates stories preferred by expert evaluators over those produced by baseline systems.
arXiv Detail & Related papers (2024-10-03T15:44:42Z)
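The decomposition idea behind Agents' Room is easy to picture in code. Below is a hypothetical sketch of a sequential multi-agent pipeline; the subtask names, prompts, and `llm` callable are assumptions, and the real framework's agents and orchestration may differ.

```python
# Illustrative sketch of decomposing narrative writing into subtasks
# handled by specialized "agents". ASSUMPTIONS: subtask names, prompt
# wording, and the `llm` placeholder are invented for illustration.

from typing import Callable, Dict

SUBTASKS = ["plan the plot", "develop the characters",
            "draft the scenes", "revise the draft"]

def agents_room_pipeline(llm: Callable[[str], str], premise: str) -> Dict[str, str]:
    """Run each specialized agent in sequence, feeding every agent the
    premise plus all earlier agents' outputs (a simple collaboration
    protocol; the actual framework's orchestration may differ)."""
    outputs: Dict[str, str] = {}
    context = f"Premise: {premise}"
    for task in SUBTASKS:
        result = llm(f"{context}\nYour role: {task}.")
        outputs[task] = result
        context += f"\n[{task}]: {result}"   # later agents see earlier work
    return outputs
```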
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an over 40% improvement in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- Conveying the Predicted Future to Users: A Case Study of Story Plot Prediction [14.036772394560238]
We create a system that produces a short description that narrates a predicted plot.
Our goal is to assist writers in crafting a consistent and compelling story arc.
arXiv Detail & Related papers (2023-02-17T20:10:55Z)
- The Next Chapter: A Study of Large Language Models in Storytelling [51.338324023617034]
The application of prompt-based learning with large language models (LLMs) has exhibited remarkable performance in diverse natural language processing (NLP) tasks.
This paper conducts a comprehensive investigation, utilizing both automatic and human evaluation, to compare the story generation capacity of LLMs with recent models.
The results demonstrate that LLMs generate stories of significantly higher quality compared to other story generation models.
arXiv Detail & Related papers (2023-01-24T02:44:02Z)
- Neural Story Planning [8.600049807193413]
We present an approach to story plot generation that unifies causal planning with neural language models.
Our system infers the preconditions for events in the story and then events that will cause those conditions to become true.
Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
arXiv Detail & Related papers (2022-12-16T21:29:41Z)
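Read as an algorithm, Neural Story Planning's precondition inference is a form of backward chaining, which a few lines make concrete. In this hedged sketch, `llm` is a placeholder text-in, text-out call and the prompts are invented for illustration; the paper's actual prompting and event representation may differ.

```python
# Hedged sketch of precondition-driven backward chaining for plot
# generation: start from the final event, ask what must hold for it
# to happen, then ask for an earlier event establishing that.
# ASSUMPTIONS: the `llm` placeholder and both prompts are illustrative.

from collections import deque
from typing import Callable, List

def backward_chain_plot(
    llm: Callable[[str], str],
    goal_event: str,
    max_events: int = 8,
) -> List[str]:
    """Work backward from the story's final event until an event
    budget is exhausted. The returned plot is ordered earliest-first."""
    plot = [goal_event]
    frontier = deque([goal_event])
    while frontier and len(plot) < max_events:
        event = frontier.popleft()
        precond = llm(f"What must already be true for this to happen: {event}?")
        enabler = llm(f"Describe an earlier event that makes this true: {precond}")
        plot.insert(0, enabler)   # enabling events come earlier in the story
        frontier.append(enabler)  # recurse: the enabler has preconditions too
    return plot
```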
- Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general-purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z)
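The training signal for such a preference model can be sketched compactly. The following is an illustrative InfoNCE-style contrastive bi-encoder in PyTorch; the encoders, dimensions, and temperature are stand-in assumptions rather than the paper's actual architecture.

```python
# Toy contrastive bi-encoder: two encoders map stories and critiques
# into a shared space, trained so matching (story, critique) pairs
# score higher than in-batch mismatches. ASSUMPTIONS: EmbeddingBag
# stand-ins for real text encoders, dim=128, temperature=0.07.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiEncoderPreferenceModel(nn.Module):
    def __init__(self, vocab_size: int = 10000, dim: int = 128):
        super().__init__()
        # Stand-ins for real text encoders (e.g., pretrained transformers).
        self.story_encoder = nn.EmbeddingBag(vocab_size, dim)
        self.critique_encoder = nn.EmbeddingBag(vocab_size, dim)

    def score(self, story_ids, critique_ids):
        s = F.normalize(self.story_encoder(story_ids), dim=-1)
        c = F.normalize(self.critique_encoder(critique_ids), dim=-1)
        return s @ c.T   # cosine-similarity matrix: [batch, batch]

def contrastive_loss(sim: torch.Tensor, temperature: float = 0.07):
    # Diagonal entries are the aligned (story, critique) pairs;
    # every other entry in a row is an in-batch negative.
    targets = torch.arange(sim.size(0))
    return F.cross_entropy(sim / temperature, targets)

# Toy usage with random token ids: 4 stories, 4 paired critiques.
model = BiEncoderPreferenceModel()
stories = torch.randint(0, 10000, (4, 32))
critiques = torch.randint(0, 10000, (4, 16))
loss = contrastive_loss(model.score(stories, critiques))
```

Once trained, such a scorer can serve as a reward model; the paper additionally fine-tunes its reward model with a prompt-learning technique, which this sketch does not cover.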
- Great Expectations: Unsupervised Inference of Suspense, Surprise and Salience in Storytelling [3.42658286826597]
The thesis trains a series of deep learning models solely by reading stories, a self-supervised (or unsupervised) setup.
Narrative theory methods are applied to the knowledge built into the deep learning models to directly infer suspense, surprise, and salience in stories.
arXiv Detail & Related papers (2022-06-20T11:00:23Z)
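For concreteness, one influential formalization this line of work draws on (Ely, Frankel & Kamenica's economic model of suspense and surprise) treats surprise as the size of the belief update a story step just caused, and suspense as the expected size of the next update. The sketch below is a toy variant of that idea; the belief vectors, the L2 distance, and the aggregation are illustrative choices, and exact definitions vary across papers.

```python
# Toy numeric sketch: surprise = how far beliefs just jumped;
# suspense = how far beliefs are expected to jump next.
# ASSUMPTIONS: beliefs are probability vectors over story outcomes;
# the L2 norm and this aggregation are illustrative simplifications.

import math

def surprise(prev_belief, cur_belief):
    """Magnitude of the belief update from one story step to the next."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(prev_belief, cur_belief)))

def suspense(cur_belief, next_beliefs, probs):
    """Expected magnitude of the *next* belief update, over possible
    continuations `next_beliefs` weighted by their probabilities."""
    return sum(w * surprise(cur_belief, b) for b, w in zip(next_beliefs, probs))

# Beliefs over two outcomes (hero survives / hero dies):
now = [0.5, 0.5]
continuations = [[0.9, 0.1], [0.1, 0.9]]   # two equally likely next states
print(suspense(now, continuations, [0.5, 0.5]))  # high: a big jump is coming either way
```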
- Guiding Neural Story Generation with Reader Models [5.935317028008691]
We introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress.
Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines in dimensions including plot plausibility and staying on topic.
arXiv Detail & Related papers (2021-12-16T03:44:01Z)
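One plausible reading of the reader-model idea is candidate reranking, sketched below under loud assumptions: the `generate` and `read` callables, the belief representation as a set of facts, and the overlap-based scoring are all invented for illustration, not StoRM's actual components.

```python
# Hypothetical sketch of reader-model-guided generation: propose
# several continuations, infer the beliefs each induces in a reader,
# and keep the one closest to the beliefs the author wants.
# ASSUMPTIONS: every name and the scoring scheme are illustrative.

from typing import Callable, List, Set

def reader_model_step(
    generate: Callable[[str, int], List[str]],  # proposes k continuations
    read: Callable[[str], Set[str]],            # infers reader beliefs as facts
    story: str,
    goal_beliefs: Set[str],
    k: int = 5,
) -> str:
    """Pick the continuation whose induced reader beliefs overlap most
    with the beliefs the author wants the reader to hold."""
    candidates = generate(story, k)
    def score(cont: str) -> int:
        return len(read(story + " " + cont) & goal_beliefs)
    return max(candidates, key=score)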
- Cue Me In: Content-Inducing Approaches to Interactive Story Generation [74.09575609958743]
We focus on the task of interactive story generation, where the user provides the model mid-level sentence abstractions.
We present two content-inducing approaches to effectively incorporate this additional information.
Experimental results from both automatic and human evaluations show that these methods produce more topically coherent and personalized stories.
arXiv Detail & Related papers (2020-10-20T00:36:15Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning that combines the generation objective with a discriminative objective for distinguishing true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)
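That multi-task setup can be sketched as a two-term loss. The following toy PyTorch example is an assumption-laden illustration (stand-in LSTM backbone, equal loss weighting, random data), not the paper's actual model.

```python
# Toy two-term objective: a language-modeling loss on story text plus
# a discriminative loss separating true stories from corrupted ones.
# ASSUMPTIONS: LSTM stand-in for a pretrained LM, 1:1 loss weighting.

import torch
import torch.nn as nn

vocab, dim = 10000, 128
embed = nn.Embedding(vocab, dim)
backbone = nn.LSTM(dim, dim, batch_first=True)   # stand-in for a pretrained LM
lm_head = nn.Linear(dim, vocab)                  # next-token prediction
cls_head = nn.Linear(dim, 2)                     # true-vs-fake story classifier

def multitask_loss(tokens: torch.Tensor, is_true: torch.Tensor) -> torch.Tensor:
    h, _ = backbone(embed(tokens))               # [batch, seq, dim]
    lm_loss = nn.functional.cross_entropy(
        lm_head(h[:, :-1]).reshape(-1, vocab),   # predict each next token
        tokens[:, 1:].reshape(-1),
    )
    cls_loss = nn.functional.cross_entropy(cls_head(h[:, -1]), is_true)
    return lm_loss + cls_loss                    # equal weighting (assumption)

# Toy batch: 4 stories of 20 tokens, half labeled fake.
loss = multitask_loss(torch.randint(0, vocab, (4, 20)),
                      torch.tensor([1, 0, 1, 0]))
```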
This list is automatically generated from the titles and abstracts of the papers on this site.