Consistency and Coherency Enhanced Story Generation
- URL: http://arxiv.org/abs/2010.08822v1
- Date: Sat, 17 Oct 2020 16:40:37 GMT
- Title: Consistency and Coherency Enhanced Story Generation
- Authors: Wei Wang, Piji Li, Hai-Tao Zheng
- Abstract summary: We propose a two-stage generation framework to enhance consistency and coherency of generated stories.
The first stage is to organize the story outline which depicts the story plots and events, and the second stage is to expand the outline into a complete story.
In addition, coreference supervision signals are incorporated to reduce coreference errors and improve the coreference consistency.
- Score: 35.08911595854691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Story generation is a challenging task, which demands maintaining
consistency of the plots and characters throughout the story. Previous works have
shown that GPT2, a large-scale language model, achieves good performance on story
generation. However, we observe that several serious issues still exist in the
stories generated by GPT2, which fall into two categories: consistency
and coherency. In terms of consistency, on the one hand, GPT2 cannot explicitly
guarantee the consistency of the plots. On the other hand, the generated stories
usually contain coreference errors. In terms of coherency, GPT2 does not directly
take into account the discourse relations between the sentences of a story. To
enhance the consistency and coherency of the generated stories, we propose a
two-stage generation framework, where the first stage is to organize the story
outline which depicts the story plots and events, and the second stage is to
expand the outline into a complete story. Therefore the plots consistency can
be controlled and guaranteed explicitly. In addition, coreference supervision
signals are incorporated to reduce coreference errors and improve the
coreference consistency. Moreover, we design an auxiliary task of discourse
relation modeling to improve the coherency of the generated stories.
Experimental results on a story dataset show that our model outperforms the
baseline approaches in terms of both automatic metrics and human evaluation.
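The pipeline described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the placeholder functions, loss weights `alpha`/`beta`, and the exact form of the auxiliary losses are all assumptions made for clarity.

```python
# Hypothetical sketch of the two-stage framework: stage 1 produces a
# plot outline, stage 2 expands it into a full story, and training
# combines a language-modeling loss with two auxiliary signals
# (coreference supervision and discourse relation modeling).
# All bodies and weights below are illustrative placeholders.

def generate_outline(title):
    """Stage 1: map a title/prompt to a sequence of plot events (placeholder)."""
    return [f"setup for {title}", f"complication in {title}", "resolution"]

def expand_outline(outline):
    """Stage 2: expand each outline event into a story sentence (placeholder)."""
    return [f"A sentence expanding the event: {event}." for event in outline]

def total_loss(lm_loss, coref_loss, discourse_loss, alpha=0.5, beta=0.5):
    """Multi-task objective: LM loss plus weighted auxiliary losses.
    alpha and beta are assumed hyperparameters, not values from the paper."""
    return lm_loss + alpha * coref_loss + beta * discourse_loss

outline = generate_outline("a lost dog")
story = expand_outline(outline)
loss = total_loss(lm_loss=2.0, coref_loss=0.4, discourse_loss=0.6)
```

Because the outline is generated first and then conditioned on, plot consistency can be constrained before any surface text is produced, which is the point of the two-stage split.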
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z) - Advancing Precise Outline-Conditioned Text Generation with Task Duality
and Explicit Outline Control [15.881568820009797]
We introduce a novel text generation task called Precise Outline-conditioned Generation.
This task requires generating stories based on specific, sentence-level outlines.
We propose an explicit outline utilization control approach and a novel framework that leverages the task duality between summarization and generation.
arXiv Detail & Related papers (2023-05-23T18:33:52Z) - Re3: Generating Longer Stories With Recursive Reprompting and Revision [83.99558005056817]
We consider the problem of automatically generating longer stories of over two thousand words.
Compared to prior work on shorter stories, long-range plot coherence and relevance are more central challenges here.
We propose the Recursive Reprompting and Revision framework (Re3) to address these challenges.
arXiv Detail & Related papers (2022-10-13T06:29:57Z) - StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story
Continuation [76.44802273236081]
We develop a model StoryDALL-E for story continuation, where the generated visual story is conditioned on a source image.
We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image.
Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
arXiv Detail & Related papers (2022-09-13T17:47:39Z) - Every picture tells a story: Image-grounded controllable stylistic story
generation [39.468435527606985]
We introduce Plug-and-Play Story Teller (PPST) to improve image-to-story generation.
We conduct image-to-story generation experiments with non-styled, romance-styled, and action-styled PPST approaches.
The results show that PPST improves story coherence and has better image-story relevance, but has yet to be adequately stylistic.
arXiv Detail & Related papers (2022-09-04T15:07:53Z) - COINS: Dynamically Generating COntextualized Inference Rules for
Narrative Story Completion [16.676036625561057]
We present COINS, a framework that iteratively reads context sentences, generates contextualized inference rules, encodes them, and guides task-specific output generation.
By modularizing inference and sentence generation steps in a recurrent model, we aim to make reasoning steps and their effects on next sentence generation transparent.
Our automatic and manual evaluations show that the model generates better story sentences than SOTA baselines, especially in terms of coherence.
arXiv Detail & Related papers (2021-06-04T14:06:33Z) - Improving Generation and Evaluation of Visual Stories via Semantic
Consistency [72.00815192668193]
Given a series of natural language captions, an agent must generate a sequence of images that correspond to the captions.
Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.
We present a number of improvements to prior modeling approaches, including the addition of a dual learning framework.
arXiv Detail & Related papers (2021-05-20T20:42:42Z) - Stylized Story Generation with Style-Guided Planning [38.791298336259146]
We propose a new task, stylized story generation, namely generating stories with a specified style given a leading context.
Our model can controllably generate emotion-driven or event-driven stories based on the ROCStories dataset.
arXiv Detail & Related papers (2021-05-18T15:55:38Z) - Inferring the Reader: Guiding Automated Story Generation with
Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
arXiv Detail & Related papers (2021-05-04T06:40:33Z) - Narrative Interpolation for Generating and Understanding Stories [52.463747140762145]
We propose a method for controlled narrative/story generation where we are able to guide the model to produce coherent narratives with user-specified target endings.
The core of our method is an incremental model based on GPT-2 which conditions on a previous sentence and a next sentence in a narrative and fills in the gap.
We show that ending-guided generation results in narratives which are coherent, faithful to the given ending guide, and require less manual effort on the part of the human guide writer than past approaches.
arXiv Detail & Related papers (2020-08-17T16:45:50Z) - PlotMachines: Outline-Conditioned Generation with Dynamic Plot State
Tracking [128.76063992147016]
We present PlotMachines, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states.
In addition, we enrich PlotMachines with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative.
arXiv Detail & Related papers (2020-04-30T17:16:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.