Improving Pacing in Long-Form Story Planning
- URL: http://arxiv.org/abs/2311.04459v1
- Date: Wed, 8 Nov 2023 04:58:29 GMT
- Title: Improving Pacing in Long-Form Story Planning
- Authors: Yichen Wang, Kevin Yang, Xiaoming Liu, Dan Klein
- Abstract summary: We propose a CONCrete Outline ConTrol (CONCOCT) system to improve pacing when automatically generating story outlines.
We first train a concreteness evaluator to judge which of two events is more concrete.
The evaluator then controls pacing in hierarchical outline generation; in this work, we explore a vaguest-first expansion procedure that aims for uniform pacing.
- Score: 55.39443681232538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing LLM-based systems for writing long-form stories or story outlines
frequently suffer from unnatural pacing, whether glossing over important events
or over-elaborating on insignificant details, resulting in a jarring experience
for the reader. We propose a CONCrete Outline ConTrol (CONCOCT) system to
improve pacing when automatically generating story outlines. We first train a
concreteness evaluator to judge which of two events is more concrete
(low-level-detailed). This evaluator can then be used to control pacing in
hierarchical outline generation; in this work, we explore a vaguest-first
expansion procedure that aims for uniform pacing. We further use the evaluator
to filter new outline items based on predicted concreteness. Compared to a
baseline hierarchical outline generator, humans judge CONCOCT's pacing to be
more consistent over 57% of the time across multiple outline lengths; the gains
also translate to downstream stories. All code, data, and models are
open-sourced.
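The control loop behind these claims is straightforward to sketch. Below is a minimal, hypothetical Python rendering of the vaguest-first expansion and concreteness filtering described in the abstract; the stubs `more_concrete` and `expand_event`, the linear-scan ranking over leaves, and all parameter choices are illustrative assumptions rather than CONCOCT's actual open-sourced implementation.

```python
# Hypothetical sketch of vaguest-first outline expansion (not the paper's code).
# Assumes two black boxes: a pairwise concreteness evaluator and an LLM expander.
from dataclasses import dataclass, field

@dataclass
class OutlineNode:
    text: str
    children: list["OutlineNode"] = field(default_factory=list)

def more_concrete(a: str, b: str) -> bool:
    """Stub for the trained pairwise evaluator: True if event `a` is
    more concrete (low-level-detailed) than event `b`."""
    raise NotImplementedError  # e.g. a finetuned classifier over (a, b) pairs

def expand_event(event: str, n: int = 3) -> list[str]:
    """Stub for the LLM call that proposes n candidate child events."""
    raise NotImplementedError

def vaguest_leaf(root: OutlineNode) -> OutlineNode:
    """Find the leaf the evaluator judges vaguest, via a linear scan."""
    leaves, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.children:
            stack.extend(node.children)
        else:
            leaves.append(node)
    vaguest = leaves[0]
    for leaf in leaves[1:]:
        if more_concrete(vaguest.text, leaf.text):  # leaf is vaguer
            vaguest = leaf
    return vaguest

def vaguest_first_expand(premise: str, steps: int = 10) -> OutlineNode:
    """Repeatedly expand the vaguest leaf, keeping only children the
    evaluator judges more concrete than their parent."""
    root = OutlineNode(premise)
    for _ in range(steps):
        target = vaguest_leaf(root)
        kept = [c for c in expand_event(target.text)
                if more_concrete(c, target.text)]
        # If every candidate is filtered, the node stays a leaf and will
        # simply be selected and retried on a later iteration.
        target.children = [OutlineNode(c) for c in kept]
    return root
```

Always expanding the vaguest remaining event is what nudges the outline toward uniform concreteness, and hence toward the more even pacing the paper measures.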
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- LongStory: Coherent, Complete and Length Controlled Long story Generation [18.886499970698285]
We present the LongStory for coherent, complete, and length-controlled long story generation.
LongStory introduces two novel methodologies: (1) the long and short-term contexts weight calibrator (CWC) and (2) long story structural positions (LSP).
arXiv Detail & Related papers (2023-11-26T06:24:25Z)
- EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form Narrative Text Generation [114.50719922069261]
We propose a new framework called Evaluation-guided Iterative Plan Extraction for long-form narrative text generation (EIPE-text).
EIPE-text has three stages: plan extraction, learning, and inference.
We evaluate the effectiveness of EIPE-text in the domains of novels and storytelling.
arXiv Detail & Related papers (2023-10-12T10:21:37Z)
- Re3: Generating Longer Stories With Recursive Reprompting and Revision [83.99558005056817]
We consider the problem of automatically generating longer stories of over two thousand words.
Compared to prior work on shorter stories, long-range plot coherence and relevance are more central challenges here.
We propose the Recursive Reprompting and Revision framework (Re3) to address these challenges.
arXiv Detail & Related papers (2022-10-13T06:29:57Z)
- Plot Writing From Pre-Trained Language Models [3.592350589927261]
Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure.
Recent work in story generation reintroduced explicit content planning in the form of prompts, keywords, or semantic frames.
We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning to generate cohesive and contentful stories.
arXiv Detail & Related papers (2022-06-07T05:30:46Z)
- Consistency and Coherency Enhanced Story Generation [35.08911595854691]
We propose a two-stage generation framework to enhance the consistency and coherency of generated stories (see the sketch after this entry).
The first stage organizes a story outline depicting the plots and events; the second stage expands the outline into a complete story.
In addition, coreference supervision signals are incorporated to reduce coreference errors and improve the coreference consistency.
arXiv Detail & Related papers (2020-10-17T16:40:37Z)
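As a rough, generic illustration of the outline-then-expand pattern described in the entry above: a minimal sketch, assuming a hypothetical `generate(prompt)` completion call; the paper's trained models and its coreference supervision signals are not reproduced here.

```python
# Minimal sketch of a two-stage outline-then-expand story pipeline.
# `generate` is a hypothetical LLM completion call, not a real API.

def generate(prompt: str) -> str:
    raise NotImplementedError  # plug in any text-generation backend

def two_stage_story(premise: str, n_events: int = 5) -> str:
    # Stage 1: organize an outline of plot events for the premise.
    outline_prompt = (
        f"Premise: {premise}\n"
        f"List {n_events} plot events, one per line:"
    )
    events = [line.strip("- ").strip()
              for line in generate(outline_prompt).splitlines()
              if line.strip()]

    # Stage 2: expand each event into prose, conditioning on the full
    # outline so passages stay consistent with the overall plan.
    passages = []
    for event in events:
        expand_prompt = (
            f"Outline: {'; '.join(events)}\n"
            f"Write a paragraph narrating this event: {event}"
        )
        passages.append(generate(expand_prompt))
    return "\n\n".join(passages)
```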
- Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries [46.183289748907804]
We propose SOE, a pipelined system that summarizes, outlines, and elaborates for long text generation.
SOE produces long texts with significantly better quality, along with faster convergence speed.
arXiv Detail & Related papers (2020-10-14T13:22:20Z)
- Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z)
- The Shmoop Corpus: A Dataset of Stories with Loosely Aligned Summaries [72.48439126769627]
We introduce the Shmoop Corpus: a dataset of 231 stories paired with detailed multi-paragraph summaries for each individual chapter.
From the corpus, we construct a set of common NLP tasks, including Cloze-form question answering and a simplified form of abstractive summarization.
We believe that the unique structure of this corpus provides an important foothold towards making machine story comprehension more approachable.
arXiv Detail & Related papers (2019-12-30T21:03:59Z)