Content Planning for Neural Story Generation with Aristotelian Rescoring
- URL: http://arxiv.org/abs/2009.09870v2
- Date: Fri, 9 Oct 2020 16:28:23 GMT
- Title: Content Planning for Neural Story Generation with Aristotelian Rescoring
- Authors: Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel,
Nanyun Peng
- Abstract summary: Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.
We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Long-form narrative text generated from large language models manages a
fluent impersonation of human writing, but only at the local sentence level,
and lacks structure or global cohesion. We posit that many of the problems of
story generation can be addressed via high-quality content planning, and
present a system that focuses on how to learn good plot structures to guide
story generation. We utilize a plot-generation language model along with an
ensemble of rescoring models that each implement an aspect of good
story-writing as detailed in Aristotle's Poetics. We find that stories written
with our more principled plot-structure are both more relevant to a given
prompt and higher quality than baselines that do not content plan, or that plan
in an unprincipled way.
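The abstract describes a plot-generation language model whose candidate plots are reranked by an ensemble of rescoring models. A minimal sketch of that ensemble-rescoring idea is below; the scorer functions, weights, and toy heuristics are illustrative assumptions, not the paper's actual (learned) rescorers.

```python
# Sketch of ensemble rescoring over candidate plots.
# All names (Rescorer, relevance_score, event_score, weights) are
# illustrative, not the paper's implementation.
from typing import Callable, List, Tuple

Rescorer = Callable[[str, str], float]  # (prompt, candidate) -> score

def relevance_score(prompt: str, candidate: str) -> float:
    # Toy proxy for prompt relevance: fraction of prompt words reused.
    prompt_words = set(prompt.lower().split())
    cand_words = set(candidate.lower().split())
    return len(prompt_words & cand_words) / max(len(prompt_words), 1)

def event_score(prompt: str, candidate: str) -> float:
    # Toy proxy for eventfulness: count verbs from a small fixed list.
    verbs = {"finds", "loses", "fights", "returns", "discovers"}
    return float(sum(w in verbs for w in candidate.lower().split()))

def rescore(prompt: str, candidates: List[str],
            scorers: List[Tuple[Rescorer, float]]) -> str:
    """Return the candidate with the highest weighted ensemble score."""
    def total(candidate: str) -> float:
        return sum(weight * scorer(prompt, candidate)
                   for scorer, weight in scorers)
    return max(candidates, key=total)

prompt = "A knight loses his sword"
candidates = [
    "The knight finds a dragon and fights it",
    "It was a sunny day",
]
best = rescore(prompt, candidates,
               [(relevance_score, 1.0), (event_score, 0.5)])
```

Each rescorer in the paper instead implements one Aristotelian aspect of good story-writing; the weighted-sum selection shown here is just one plausible way to combine such scorers.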
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- Visual Storytelling with Question-Answer Plans
We present a novel framework which integrates visual representations with pretrained language models and planning.
Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret.
It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative.
arXiv Detail & Related papers (2023-10-08T21:45:34Z)
- Little Red Riding Hood Goes Around the Globe: Crosslingual Story Planning and Generation with Large Language Models
Previous work has demonstrated the effectiveness of planning for story generation only in monolingual settings, focusing primarily on English.
We propose a new task of cross-lingual story generation with planning and present a new dataset for this task.
arXiv Detail & Related papers (2022-12-20T17:42:16Z)
- Neural Story Planning
We present an approach to story plot generation that unifies causal planning with neural language models.
Our system infers the preconditions for events in the story and then events that will cause those conditions to become true.
Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
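The summary above describes inferring preconditions for story events and then finding events that make those conditions true. A purely symbolic sketch of that backward-chaining idea follows; the event definitions are invented for illustration, and the neural inference the paper relies on is omitted entirely.

```python
# Toy backward-chaining plot planner: work backwards from a goal event,
# recursively inserting events whose effects satisfy unmet preconditions.
# The event library below is illustrative, not from the paper.
from typing import Dict, List, Set

EVENTS: Dict[str, Dict[str, Set[str]]] = {
    "slay_dragon":  {"pre": {"has_sword", "found_dragon"},
                     "eff": {"dragon_slain"}},
    "take_sword":   {"pre": set(), "eff": {"has_sword"}},
    "track_dragon": {"pre": set(), "eff": {"found_dragon"}},
}

def plan(goal_event: str, state: Set[str]) -> List[str]:
    """Return an ordered event list in which every precondition
    is satisfied before the event that needs it."""
    plot: List[str] = []

    def achieve(event: str) -> None:
        for cond in sorted(EVENTS[event]["pre"]):
            if cond not in state:
                # Pick some event whose effects make the condition true.
                provider = next(e for e, d in EVENTS.items()
                                if cond in d["eff"])
                achieve(provider)
        state.update(EVENTS[event]["eff"])
        plot.append(event)

    achieve(goal_event)
    return plot

plotline = plan("slay_dragon", state=set())
```

In the paper, the precondition/effect inference is done by a language model rather than by a hand-written event library; the recursion shown here only illustrates how inferred conditions would be chained into a coherent plotline.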
arXiv Detail & Related papers (2022-12-16T21:29:41Z)
- Plot Writing From Pre-Trained Language Models
Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure.
Recent work in story generation reintroduced explicit content planning in the form of prompts, keywords, or semantic frames.
We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning to generate cohesive and contentful stories.
arXiv Detail & Related papers (2022-06-07T05:30:46Z)
- Goal-Directed Story Generation: Augmenting Generative Language Models with Reinforcement Learning
We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories.
The first uses proximal policy optimization to fine-tune an existing transformer-based language model so that its generated continuations are also goal-seeking.
The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by a language model.
arXiv Detail & Related papers (2021-12-16T03:34:14Z)
- Stylized Story Generation with Style-Guided Planning
We propose a new task, stylized story generation, namely generating stories with a specified style given a leading context.
Our model can controllably generate emotion-driven or event-driven stories based on the ROCStories dataset.
arXiv Detail & Related papers (2021-05-18T15:55:38Z)
- PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking
We present PlotMachines, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states.
In addition, we enrich PlotMachines with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative.
arXiv Detail & Related papers (2020-04-30T17:16:31Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning that combines the generation objective with a discriminative objective of distinguishing true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.