Re3: Generating Longer Stories With Recursive Reprompting and Revision
- URL: http://arxiv.org/abs/2210.06774v2
- Date: Fri, 14 Oct 2022 19:59:30 GMT
- Title: Re3: Generating Longer Stories With Recursive Reprompting and Revision
- Authors: Kevin Yang, Yuandong Tian, Nanyun Peng, Dan Klein
- Abstract summary: We consider the problem of automatically generating longer stories of over two thousand words.
Compared to prior work on shorter stories, long-range plot coherence and relevance are more central challenges here.
We propose the Recursive Reprompting and Revision framework (Re3) to address these challenges.
- Score: 83.99558005056817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of automatically generating longer stories of over
two thousand words. Compared to prior work on shorter stories, long-range plot
coherence and relevance are more central challenges here. We propose the
Recursive Reprompting and Revision framework (Re3) to address these challenges
by (a) prompting a general-purpose language model to construct a structured
overarching plan, and (b) generating story passages by repeatedly injecting
contextual information from both the plan and current story state into a
language model prompt. We then revise by (c) reranking different continuations
for plot coherence and premise relevance, and finally (d) editing the best
continuation for factual consistency. Compared to similar-length stories
generated directly from the same base model, human evaluators judged
substantially more of Re3's stories as having a coherent overarching plot (by
14% absolute increase), and relevant to the given initial premise (by 20%).
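To make the pipeline concrete, the following is a minimal Python sketch of the four stages; `complete` stands in for any general-purpose language model call, and the reranking and editing helpers are hypothetical placeholders rather than Re3's actual modules.

```python
# Minimal sketch of the Re3 plan-draft-rerank-edit loop.
# All helpers below are hypothetical stand-ins, not the paper's models.

def complete(prompt: str) -> str:
    """Placeholder for a general-purpose language model call."""
    raise NotImplementedError

def rerank_score(premise: str, story: str, passage: str) -> float:
    """Hypothetical score for plot coherence and premise relevance."""
    raise NotImplementedError

def edit_for_consistency(story: str, passage: str) -> str:
    """Hypothetical editor that fixes factual inconsistencies."""
    raise NotImplementedError

def generate_story(premise: str, passages: int = 10, candidates: int = 4) -> str:
    # (a) Plan: prompt the model for a structured overarching plan.
    plan = complete(f"Premise: {premise}\nWrite the setting, characters, and outline:")
    story = ""
    for _ in range(passages):
        # (b) Draft: re-inject the plan and current story state.
        prompt = (f"Premise: {premise}\nPlan: {plan}\n"
                  f"Story so far: {story[-2000:]}\nContinue the story:")
        options = [complete(prompt) for _ in range(candidates)]
        # (c) Rewrite: rerank continuations for coherence and relevance.
        best = max(options, key=lambda p: rerank_score(premise, story, p))
        # (d) Edit: fix factual inconsistencies before committing.
        story += edit_for_consistency(story, best)
    return story
```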
Related papers
- Improving Pacing in Long-Form Story Planning [55.39443681232538]
We propose a CONCrete Outline ConTrol system to improve pacing when automatically generating story outlines.
We first train a concreteness evaluator to judge which of two events is more concrete.
Using this evaluator, we explore a vaguest-first expansion procedure that aims for uniform pacing.
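A minimal sketch of such a vaguest-first loop might look as follows; `more_concrete` and `expand` are hypothetical stand-ins for the trained evaluator and the expansion model.

```python
# Sketch of vaguest-first outline expansion for uniform pacing.
# Both helpers are hypothetical stand-ins, not the paper's system.

def more_concrete(event_a: str, event_b: str) -> bool:
    """Hypothetical pairwise judgment: True if event_a is more concrete."""
    raise NotImplementedError

def expand(event: str) -> list[str]:
    """Hypothetical call expanding one outline event into sub-events."""
    raise NotImplementedError

def expand_outline(outline: list[str], steps: int) -> list[str]:
    for _ in range(steps):
        # Find the vaguest (least concrete) event via pairwise comparisons.
        vaguest = outline[0]
        for event in outline[1:]:
            if more_concrete(vaguest, event):
                vaguest = event
        # Replace it in place with its more concrete sub-events.
        i = outline.index(vaguest)
        outline[i:i + 1] = expand(vaguest)
    return outline
```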
arXiv Detail & Related papers (2023-11-08T04:58:29Z)
- A Focused Study on Sequence Length for Dialogue Summarization [68.73335643440957]
We analyze the length differences between existing models' outputs and the corresponding human references.
We identify salient features for summary length prediction by comparing different model settings.
Finally, we experiment with a length-aware summarizer and show notable improvements over existing models when summary length is well incorporated.
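As a rough illustration of length-aware summarization, the sketch below conditions generation on a predicted target length; the `summarize` call and the length heuristic are illustrative assumptions, not the paper's model.

```python
# Sketch of length-aware summarization: condition the model on a
# predicted target length. `summarize` is a generic stand-in and
# the length heuristic is illustrative only.

def summarize(text: str, max_tokens: int) -> str:
    """Placeholder for a seq2seq summarizer with a length budget."""
    raise NotImplementedError

def predict_target_length(dialogue: str) -> int:
    # Illustrative heuristic; salient features learned from data
    # would replace this in practice.
    return max(20, len(dialogue.split()) // 10)

def length_aware_summary(dialogue: str) -> str:
    target = predict_target_length(dialogue)
    # Encode the target length in the prompt and cap decoding.
    prompt = f"Summarize in about {target} words: {dialogue}"
    return summarize(prompt, max_tokens=int(target * 1.2))
```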
arXiv Detail & Related papers (2022-09-24T02:49:48Z)
- SNaC: Coherence Error Detection for Narrative Summarization [73.48220043216087]
We introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries.
We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie screenplay summaries.
Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowd annotators.
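Such span-level annotations might be represented with a record like the following sketch; the error types listed are illustrative placeholders, not SNaC's actual taxonomy.

```python
# Sketch of a span-level coherence annotation record, in the spirit
# of SNaC's fine-grained scheme. The error types are illustrative
# placeholders, not the paper's taxonomy.

from dataclasses import dataclass

ERROR_TYPES = {"character_confusion", "missing_context", "repetition"}  # illustrative

@dataclass
class CoherenceAnnotation:
    summary_id: str
    sentence_index: int    # which sentence of the summary
    span: tuple[int, int]  # character offsets of the problematic span
    error_type: str        # one of ERROR_TYPES

    def __post_init__(self) -> None:
        if self.error_type not in ERROR_TYPES:
            raise ValueError(f"unknown error type: {self.error_type}")
```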
arXiv Detail & Related papers (2022-05-19T16:01:47Z)
- COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion [16.676036625561057]
We present COINS, a framework that iteratively reads context sentences, generates contextualized inference rules, encodes them, and guides task-specific output generation.
By modularizing inference and sentence generation steps in a recurrent model, we aim to make reasoning steps and their effects on next sentence generation transparent.
Our automatic and manual evaluations show that the model generates better story sentences than SOTA baselines, especially in terms of coherence.
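A minimal sketch of this read-infer-generate loop, with both generator calls as hypothetical stand-ins for the paper's modules:

```python
# Sketch of a COINS-style loop: alternate between generating
# contextualized inference rules and the next story sentence.

def generate_rules(context: list[str]) -> list[str]:
    """Hypothetical module producing inference rules from context."""
    raise NotImplementedError

def generate_sentence(context: list[str], rules: list[str]) -> str:
    """Hypothetical module generating the next sentence guided by rules."""
    raise NotImplementedError

def complete_story(context: list[str], missing: int) -> list[str]:
    for _ in range(missing):
        rules = generate_rules(context)  # explicit, inspectable reasoning step
        context.append(generate_sentence(context, rules))  # guided generation
    return context
```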
arXiv Detail & Related papers (2021-06-04T14:06:33Z)
- Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
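The core filtering idea might be sketched as follows; `infer` and `candidates` are hypothetical stand-ins for a commonsense inference model and a sampling language model.

```python
# Sketch of commonsense-augmented generation in the spirit of CAST:
# keep the candidate continuation whose inferred implications best
# match inferences about the story so far.

def infer(text: str) -> set[str]:
    """Hypothetical commonsense inferences (e.g., character intents)."""
    raise NotImplementedError

def candidates(story: str, n: int = 10) -> list[str]:
    """Hypothetical LM call sampling n candidate next sentences."""
    raise NotImplementedError

def next_sentence(story: str) -> str:
    expected = infer(story)
    # Prefer the candidate whose inferences overlap most with expectations.
    return max(candidates(story), key=lambda c: len(infer(c) & expected))
```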
arXiv Detail & Related papers (2021-05-04T06:40:33Z)
- Consistency and Coherency Enhanced Story Generation [35.08911595854691]
We propose a two-stage generation framework to enhance consistency and coherency of generated stories.
The first stage is to organize the story outline which depicts the story plots and events, and the second stage is to expand the outline into a complete story.
In addition, coreference supervision signals are incorporated to reduce coreference errors and improve the coreference consistency.
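A minimal sketch of the two-stage scheme, with `complete` as a generic language model stand-in (the coreference supervision is not modeled here):

```python
# Sketch of outline-then-story two-stage generation.
# `complete` is a generic language model placeholder.

def complete(prompt: str) -> str:
    """Placeholder for a language model completion call."""
    raise NotImplementedError

def two_stage_story(title: str) -> str:
    # Stage 1: organize the plot as an outline of events.
    outline = complete(f"Title: {title}\nWrite an outline of the main plot events:")
    # Stage 2: expand the outline into the complete story.
    return complete(f"Title: {title}\nOutline: {outline}\nWrite the full story:")
```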
arXiv Detail & Related papers (2020-10-17T16:40:37Z)
- Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries [46.183289748907804]
We propose SOE, a pipelined system that summarizes, outlines, and elaborates for long text generation.
SOE produces long texts with significantly better quality, along with faster convergence speed.
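A rough sketch of the outline-then-elaborate idea, with `complete` as a generic language model stand-in rather than the paper's trained pipeline:

```python
# Sketch of an outline-then-elaborate pipeline in the spirit of SOE.
# `complete` is a generic placeholder; the paper's hierarchical
# supervision from extractive summaries is not modeled here.

def complete(prompt: str) -> str:
    """Placeholder for a language model completion call."""
    raise NotImplementedError

def generate_long_text(topic: str, n_paragraphs: int = 5) -> str:
    # Outline: one summary sentence per future paragraph.
    outline = [complete(f"Topic: {topic}\nSummary sentence for paragraph {i + 1}:")
               for i in range(n_paragraphs)]
    # Elaborate: expand each summary sentence into a full paragraph.
    paragraphs = [complete(f"Expand into a paragraph: {s}") for s in outline]
    return "\n\n".join(paragraphs)
```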
arXiv Detail & Related papers (2020-10-14T13:22:20Z)
- Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization [72.54873655114844]
Text summarization is one of the most challenging and interesting problems in NLP.
This work proposes a multi-view sequence-to-sequence model that first extracts conversational structures of unstructured daily chats from different views to represent conversations.
Experiments on a large-scale dialogue summarization corpus demonstrated that our methods significantly outperformed previous state-of-the-art models via both automatic evaluations and human judgment.
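The multi-view idea might be sketched as follows; all helpers are hypothetical stand-ins for the paper's segmentation, encoder, and decoder components.

```python
# Sketch of a multi-view scheme: segment a chat from several
# structural views, encode each view, and decode a summary from
# the combined representations. All helpers are hypothetical.

def segment_by_topic(turns: list[str]) -> list[list[str]]:
    """Hypothetical topic-based segmentation of dialogue turns."""
    raise NotImplementedError

def segment_by_stage(turns: list[str]) -> list[list[str]]:
    """Hypothetical stage-based segmentation (opening, body, closing)."""
    raise NotImplementedError

def encode(view: list[list[str]]) -> list[float]:
    """Hypothetical encoder producing one vector per view."""
    raise NotImplementedError

def decode(view_vectors: list[list[float]]) -> str:
    """Hypothetical decoder attending over all view encodings."""
    raise NotImplementedError

def summarize(turns: list[str]) -> str:
    views = [segment_by_topic(turns), segment_by_stage(turns)]
    return decode([encode(v) for v in views])
```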
arXiv Detail & Related papers (2020-10-04T20:12:44Z)
- Content Planning for Neural Story Generation with Aristotelian Rescoring [39.07607377794395]
Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.
We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation.
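One way to realize such plan rescoring is sketched below; the event sampler and scorer are hypothetical stand-ins, not the paper's trained Aristotelian discriminators.

```python
# Sketch of content planning with rescoring: sample candidate plot
# events and commit the one preferred by plot-quality scorers.
# Both helpers are hypothetical stand-ins.

def sample_events(plot: list[str], n: int = 10) -> list[str]:
    """Hypothetical plot model proposing candidate next events."""
    raise NotImplementedError

def plot_score(plot: list[str], event: str) -> float:
    """Hypothetical combined score from plot-quality discriminators."""
    raise NotImplementedError

def build_plot(premise: str, length: int) -> list[str]:
    plot = [premise]
    for _ in range(length):
        # Rescore candidates and keep the highest-scoring event.
        plot.append(max(sample_events(plot), key=lambda e: plot_score(plot, e)))
    return plot
```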
arXiv Detail & Related papers (2020-09-21T13:41:32Z)
- The Shmoop Corpus: A Dataset of Stories with Loosely Aligned Summaries [72.48439126769627]
We introduce the Shmoop Corpus: a dataset of 231 stories paired with detailed multi-paragraph summaries for each individual chapter.
From the corpus, we construct a set of common NLP tasks, including Cloze-form question answering and a simplified form of abstractive summarization.
We believe that the unique structure of this corpus provides an important foothold towards making machine story comprehension more approachable.
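Deriving a Cloze-form example from a chapter/summary pair might look like the sketch below; the masking scheme is illustrative, not the paper's exact construction.

```python
# Sketch of building a Cloze-form QA example from a chapter and its
# aligned summary. The masking scheme is illustrative only.

import random

def make_cloze(chapter: str, summary_sentences: list[str]) -> dict:
    i = random.randrange(len(summary_sentences))
    question = summary_sentences[:i] + ["____"] + summary_sentences[i + 1:]
    return {
        "context": chapter,              # the aligned chapter text
        "question": " ".join(question),  # summary with one sentence masked
        "answer": summary_sentences[i],  # the masked sentence to recover
    }
```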
arXiv Detail & Related papers (2019-12-30T21:03:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.