Plot Writing From Pre-Trained Language Models
- URL: http://arxiv.org/abs/2206.03021v1
- Date: Tue, 7 Jun 2022 05:30:46 GMT
- Title: Plot Writing From Pre-Trained Language Models
- Authors: Yiping Jin, Vishakha Kadam, Dittaya Wanvarie
- Abstract summary: Pre-trained language models (PLMs) fail to generate long-form narrative text because they do not consider global structure.
Recent work in story generation reintroduced explicit content planning in the form of prompts, keywords, or semantic frames.
We propose generating story plots using off-the-shelf PLMs while maintaining the benefit of content planning to generate cohesive and contentful stories.
- Score: 3.592350589927261
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Pre-trained language models (PLMs) fail to generate long-form narrative text
because they do not consider global structure. As a result, the generated texts
are often incohesive, repetitive, or lack content. Recent work in story
generation reintroduced explicit content planning in the form of prompts,
keywords, or semantic frames. Trained on large parallel corpora, these models
can generate more logical event sequences and thus more contentful stories.
However, these intermediate representations are often not in natural language
and cannot be utilized by PLMs without fine-tuning. We propose generating story
plots using off-the-shelf PLMs while maintaining the benefit of content
planning to generate cohesive and contentful stories. Our proposed method,
ScratchPlot, first prompts a PLM to compose a content plan. Then, we generate
the story's body and ending conditioned on the content plan. Furthermore, we
take a generate-and-rank approach by using additional PLMs to rank the
generated (story, ending) pairs. We benchmark our method with various baselines
and achieve superior results in both human and automatic evaluation.
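The pipeline described in the abstract (prompt a PLM for a content plan, generate the body and ending conditioned on it, then rank candidate (story, ending) pairs) could be sketched roughly as follows. This is a minimal illustration rather than the authors' code: it assumes a HuggingFace-style causal LM, and average token log-likelihood stands in for the paper's dedicated ranking PLMs.

```python
# Minimal sketch of a ScratchPlot-style generate-and-rank loop (not the authors' code).
# Assumes a HuggingFace causal LM; average token log-likelihood stands in for the
# paper's dedicated ranking PLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def generate(prompt, max_new_tokens=80, num_return_sequences=1):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, do_sample=True, top_p=0.9,
                      max_new_tokens=max_new_tokens,
                      num_return_sequences=num_return_sequences,
                      pad_token_id=tok.eos_token_id)
    return [tok.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in out]

def avg_logprob(context, continuation):
    """Average log-likelihood of `continuation` given `context` under the LM."""
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full = tok(context + continuation, return_tensors="pt").input_ids
    labels = full.clone()
    labels[:, :ctx_len] = -100               # score only the continuation tokens
    with torch.no_grad():
        loss = lm(full, labels=labels).loss  # mean NLL over unmasked tokens
    return -loss.item()

# 1) Prompt the PLM to compose a content plan.
plan = generate("Write a short story plan with a protagonist, a setting, and a conflict:\n")[0]
# 2) Generate the story body conditioned on the content plan.
body = generate(f"Story plan:\n{plan}\n\nStory:\n", max_new_tokens=150)[0]
# 3) Generate several candidate endings and rank the (story, ending) pairs.
endings = generate(f"Story plan:\n{plan}\n\nStory:\n{body}\n\nEnding:\n",
                   max_new_tokens=60, num_return_sequences=4)
best_ending = max(endings, key=lambda e: avg_logprob(body, " " + e))
print(body + "\n" + best_ending)
```

In the paper the ranking step uses separate PLMs and the content-plan prompts cover specific elements; the single scoring function above only marks where such rankers would plug in.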
Related papers
- MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation [50.01780173691132]
We introduce Modular Story Premise Synthesis (MoPS).
MoPS breaks down story premises into modules like background and persona for automated design and generation.
Thorough evaluations demonstrate that our synthesized premises excel in diversity, fascination, completeness, and originality.
arXiv Detail & Related papers (2024-06-09T08:31:14Z)
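As a rough illustration of the modular idea only (not the MoPS implementation), a premise can be assembled from independently sampled modules such as background and persona before being handed to a story generator; the module pools below are invented for the example.

```python
# Toy illustration of modular premise synthesis (invented module pools, not MoPS itself):
# independently sampled modules are composed into a single premise string.
import random

MODULES = {
    "background": ["a drowned coastal city", "a generation ship between stars"],
    "persona":    ["a retired cartographer", "a smuggler who cannot lie"],
    "event":      ["finds a map that redraws itself", "is hired to deliver a sealed letter"],
}

def synthesize_premise(rng=random):
    parts = {name: rng.choice(pool) for name, pool in MODULES.items()}
    return f"In {parts['background']}, {parts['persona']} {parts['event']}."

print(synthesize_premise())
```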
- Guiding and Diversifying LLM-Based Story Generation via Answer Set Programming [1.7889842797216124]
Large language models (LLMs) are capable of generating stories in response to open-ended user requests.
We propose using a more abstract symbolic specification of high-level story structure to guide and diversify story generation.
arXiv Detail & Related papers (2024-06-01T21:14:25Z)
- Text-Blueprint: An Interactive Platform for Plan-based Conditional Generation [84.95981645040281]
Planning can be a useful intermediate step to render conditional generation less opaque and more grounded.
We present a web browser-based demonstration for query-focused summarization that uses a sequence of question-answer pairs.
arXiv Detail & Related papers (2023-04-28T18:14:48Z)
- Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z)
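A bi-encoder preference model of this kind might be approximated as in the sketch below, which is not the paper's code: it assumes a generic HuggingFace encoder (distilbert-base-uncased here) with mean pooling and an in-batch InfoNCE-style loss to align stories with the critiques they satisfy.

```python
# Sketch of a contrastive bi-encoder preference model (not the paper's code).
# Stories and the critiques/preferences they satisfy are embedded separately and
# aligned with an in-batch InfoNCE-style loss; the encoder choice is an assumption.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # mean-pool over real tokens
    return (hidden * mask).sum(1) / mask.sum(1)

def contrastive_loss(stories, critiques, temperature=0.05):
    s = F.normalize(embed(stories), dim=-1)
    c = F.normalize(embed(critiques), dim=-1)
    logits = s @ c.T / temperature                    # (B, B) similarity matrix
    targets = torch.arange(len(stories))              # matched pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

def preference_score(story, critique):
    """At inference time the normalized dot product serves as a reward signal."""
    s = F.normalize(embed([story]), dim=-1)
    c = F.normalize(embed([critique]), dim=-1)
    return float((s * c).sum())
```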
- Few-Shot Table-to-Text Generation with Prefix-Controlled Generator [11.891732582638227]
We propose a prompt-based approach, Prefix-Controlled Generator (i.e., PCG), for few-shot table-to-text generation.
We prepend a task-specific prefix for a PLM to make the table structure better fit the pre-trained input.
In addition, we generate an input-specific prefix to control the factual contents and word order of the generated text.
arXiv Detail & Related papers (2022-08-23T03:23:26Z)
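The PCG prefixes are learned continuous vectors; the sketch below is a deliberately simplified stand-in that uses plain-text task-specific and input-specific prefixes, purely to show how a linearized table and the two prefixes combine into one prompt for an off-the-shelf PLM.

```python
# Simplified sketch of prefix-controlled table-to-text prompting (not the PCG model):
# learned continuous prefixes are replaced here by plain-text prefixes for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def linearize(table: dict) -> str:
    """Flatten an attribute-value table into a text sequence the PLM can read."""
    return " | ".join(f"{k} : {v}" for k, v in table.items())

def table_to_text(table, wanted_slots):
    task_prefix = "Task: describe the table in one fluent sentence."   # task-specific
    input_prefix = "Mention: " + ", ".join(wanted_slots)               # input-specific
    prompt = f"{task_prefix}\n{input_prefix}\nTable: {linearize(table)}\nDescription:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=40, do_sample=False,
                      pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

print(table_to_text({"name": "Blue Spice", "food": "Thai", "area": "riverside"},
                    wanted_slots=["name", "food"]))
```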
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation [52.835877179365525]
We propose a novel continuous prompting approach, called Context-Tuning, to fine-tuning PLMs for natural language generation.
Firstly, the prompts are derived based on the input text, so that they can elicit useful knowledge from PLMs for generation.
Secondly, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the process of natural language generation.
arXiv Detail & Related papers (2022-01-21T12:35:28Z)
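A rough approximation of input-derived continuous prompts is sketched below; it is not the Context-Tuning model (and omits the inverse-prompting refinement), but it shows the mechanics of mapping the input to a few prompt vectors and prepending them to a GPT-2 style decoder via inputs_embeds. The prompt length and the single linear mapper are assumptions.

```python
# Rough sketch of input-derived continuous prompts (not the Context-Tuning model).
# The input text is pooled, mapped to a few prompt vectors by a trainable linear layer
# (an assumption), and prepended to the decoder's token embeddings via `inputs_embeds`.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
hidden_size = lm.config.n_embd
n_prompt = 8                                                      # prompt length (assumption)
prompt_mapper = nn.Linear(hidden_size, n_prompt * hidden_size)    # the trainable part

def contextual_prompt(input_text):
    ids = tok(input_text, return_tensors="pt").input_ids
    with torch.no_grad():
        token_embeds = lm.transformer.wte(ids)                    # (1, T, H) frozen embeddings
    pooled = token_embeds.mean(dim=1)                             # crude summary of the input
    return prompt_mapper(pooled).view(1, n_prompt, hidden_size)

def lm_loss_with_prompt(input_text, target_text):
    prompts = contextual_prompt(input_text)                       # (1, P, H)
    tgt_ids = tok(target_text, return_tensors="pt").input_ids
    tgt_embeds = lm.transformer.wte(tgt_ids)                      # (1, T, H)
    embeds = torch.cat([prompts, tgt_embeds], dim=1)
    ignore = torch.full((1, n_prompt), -100, dtype=torch.long)    # no loss on prompt positions
    labels = torch.cat([ignore, tgt_ids], dim=1)
    return lm(inputs_embeds=embeds, labels=labels).loss           # optimize prompt_mapper on this
```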
- A Survey of Pretrained Language Models Based Text Generation [97.64625999380425]
Text Generation aims to produce plausible and readable text in human language from input data.
Deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs).
Grounding text generation on PLMs is seen as a promising direction in both academia and industry.
arXiv Detail & Related papers (2022-01-14T01:44:58Z)
- Goal-Directed Story Generation: Augmenting Generative Language Models with Reinforcement Learning [7.514717103747824]
We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories.
The first uses proximal policy optimization to fine-tune an existing transformer-based language model so that its text continuations are also goal-seeking.
The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by a language model.
arXiv Detail & Related papers (2021-12-16T03:34:14Z)
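As a toy stand-in for the second technique only: the extracted knowledge graph and graph-attention policy are replaced here by a simple goal-word-overlap reward used to pick among LM-generated candidate continuations, purely to show where a learned policy would slot in.

```python
# Toy stand-in for goal-directed continuation selection (not the paper's method).
# The knowledge graph and graph-attention policy are replaced by a goal-word-overlap
# reward used to choose among candidate continuations produced by a language model.
def goal_reward(continuation: str, goal_words: set) -> float:
    tokens = {w.strip(".,!?") for w in continuation.lower().split()}
    return len(tokens & goal_words) / max(len(goal_words), 1)

def pick_goal_seeking(candidates, goal_words):
    return max(candidates, key=lambda c: goal_reward(c, goal_words))

candidates = ["The knight fled the castle.",
              "The knight crept toward the lair of the dragon."]
print(pick_goal_seeking(candidates, goal_words={"dragon", "lair"}))
```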
- Plan ahead: Self-Supervised Text Planning for Paragraph Completion Task [14.483791451578007]
We propose a self-supervised text planner SSPlanner that predicts what to say first.
It then guides the pretrained language model (surface realization) using the predicted content.
We also find that a combination of noun and verb types of keywords is the most effective for content selection.
arXiv Detail & Related papers (2020-10-11T02:38:21Z)
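A crude sketch of the plan-then-realize loop is given below; it is not SSPlanner itself. The planner's predicted noun/verb keywords are replaced by a hand-written, purely illustrative list, and only the step of guiding the PLM's surface realization with the plan is shown.

```python
# Crude sketch of plan-then-realize paragraph completion (not SSPlanner itself).
# The planner's predicted noun/verb keywords are replaced by a hand-written,
# purely illustrative list; only the "guide the PLM with the plan" step is shown.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def complete_paragraph(context, planned_keywords):
    # The textual prompt carries the plan; the PLM handles surface realization.
    prompt = (f"{context}\n"
              f"Keywords to cover next: {', '.join(planned_keywords)}\n"
              f"Continuation:")
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9,
                      pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

context = "The storm had finally passed, and the harbour was quiet."
print(complete_paragraph(context, planned_keywords=["fisherman", "repair", "boat", "discover"]))
```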
- Content Planning for Neural Story Generation with Aristotelian Rescoring [39.07607377794395]
Generated long-form narrative text often manages a fluent impersonation of human writing, but only at the local sentence level; it lacks structure and global cohesion.
We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation.
arXiv Detail & Related papers (2020-09-21T13:41:32Z)
- PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation [92.7366819044397]
Self-supervised pre-training has emerged as a powerful technique for natural language understanding and generation.
This work presents PALM with a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus.
An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks.
arXiv Detail & Related papers (2020-04-14T06:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.