Stylized Story Generation with Style-Guided Planning
- URL: http://arxiv.org/abs/2105.08625v2
- Date: Wed, 19 May 2021 15:54:05 GMT
- Title: Stylized Story Generation with Style-Guided Planning
- Authors: Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan and Minlie Huang
- Abstract summary: We propose a new task, stylized story generation, namely generating stories with a specified style given a leading context.
Our model can controllably generate emotion-driven or event-driven stories based on the ROCStories dataset.
- Score: 38.791298336259146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current storytelling systems focus more on generating stories with coherent
plots regardless of the narration style, which is important for controllable
text generation. Therefore, we propose a new task, stylized story generation,
namely generating stories with a specified style given a leading context. To
tackle the problem, we propose a novel generation model that first plans the
stylized keywords and then generates the whole story with the guidance of the
keywords. Besides, we propose two automatic metrics to evaluate the consistency
between the generated story and the specified style. Experiments demonstrate
that our model can controllably generate emotion-driven or event-driven stories
based on the ROCStories dataset (Mostafazadeh et al., 2016). Our study presents
insights for stylized story generation in further research.
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- Visual Storytelling with Question-Answer Plans [70.89011289754863]
We present a novel framework which integrates visual representations with pretrained language models and planning.
Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret.
It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative.
arXiv Detail & Related papers (2023-10-08T21:45:34Z)
- Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models [70.86603627188519]
We focus on a novel, yet challenging task of generating a coherent image sequence based on a given storyline, denoted as open-ended visual storytelling.
We propose a learning-based auto-regressive image generation model, termed StoryGen, with a novel vision-language context module.
We show that StoryGen can generalize to unseen characters without any optimization and generate image sequences with coherent content and consistent characters.
arXiv Detail & Related papers (2023-06-01T17:58:50Z)
- Every picture tells a story: Image-grounded controllable stylistic story generation [39.468435527606985]
We introduce Plug-and-Play Story Teller (PPST) to improve image-to-story generation.
We conduct image-to-story generation experiments with non-styled, romance-styled, and action-styled PPST approaches.
The results show that PPST improves story coherence and image-story relevance, but its outputs are not yet adequately stylistic.
arXiv Detail & Related papers (2022-09-04T15:07:53Z)
- Goal-Directed Story Generation: Augmenting Generative Language Models with Reinforcement Learning [7.514717103747824]
We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories.
The first utilizes proximal policy optimization to fine-tune an existing transformer-based language model so that it generates text continuations while also being goal-seeking.
The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by a language model (a minimal sketch of this candidate-selection step appears after this list).
arXiv Detail & Related papers (2021-12-16T03:34:14Z)
- Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
arXiv Detail & Related papers (2021-05-04T06:40:33Z)
- Outline to Story: Fine-grained Controllable Story Generation from Cascaded Events [39.577220559911055]
We propose a new task named "Outline to Story" (O2S) as a test bed for fine-grained controllable generation of long text.
We then create datasets for future benchmarks, built with state-of-the-art keyword extraction techniques.
arXiv Detail & Related papers (2021-01-04T08:16:21Z)
- Cue Me In: Content-Inducing Approaches to Interactive Story Generation [74.09575609958743]
We focus on the task of interactive story generation, where the user provides the model mid-level sentence abstractions.
We present two content-inducing approaches to effectively incorporate this additional information.
Experimental results from both automatic and human evaluations show that these methods produce more topically coherent and personalized stories.
arXiv Detail & Related papers (2020-10-20T00:36:15Z)
- PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking [128.76063992147016]
We present PlotMachines, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states.
In addition, we enrich PlotMachines with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative.
arXiv Detail & Related papers (2020-04-30T17:16:31Z)
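As a concrete illustration of the candidate-selection step in the goal-directed reinforcement learning entry above, the sketch below samples several continuations from a language model and picks the one a scoring function rates closest to a goal. The keyword-overlap scorer is a deliberately simple stand-in for the paper's graph-attention policy over an extracted knowledge graph; the model, goal vocabulary, and sampling settings are assumptions for illustration.

```python
# Illustrative sketch: sample candidate continuations, then let a scorer
# pick the one that best advances the story toward a goal. The overlap
# scorer below is a stand-in; the paper scores knowledge-graph states
# with a graph-attention policy network.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sample_continuations(story: str, n: int = 5) -> list[str]:
    ids = tokenizer(story, return_tensors="pt").input_ids
    outs = model.generate(ids, max_new_tokens=20, do_sample=True, top_p=0.9,
                          num_return_sequences=n,
                          pad_token_id=tokenizer.eos_token_id)
    return [tokenizer.decode(o[ids.shape[1]:], skip_special_tokens=True)
            for o in outs]

def goal_score(continuation: str, goal_words: set[str]) -> float:
    # Stand-in reward: lexical overlap with a goal vocabulary.
    tokens = set(continuation.lower().split())
    return len(tokens & goal_words) / len(goal_words)

story = "The knight rode toward the dark castle."
candidates = sample_continuations(story)
best = max(candidates, key=lambda c: goal_score(c, {"dragon", "rescue"}))
print(story + best)
```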