Generating Narrative Text in a Switching Dynamical System
- URL: http://arxiv.org/abs/2004.03762v1
- Date: Wed, 8 Apr 2020 01:05:19 GMT
- Title: Generating Narrative Text in a Switching Dynamical System
- Authors: Noah Weber, Leena Shekhar, Heeyoung Kwon, Niranjan Balasubramanian,
Nathanael Chambers
- Abstract summary: We formalize narrative modeling as a Switching Linear Dynamical System (SLDS).
An SLDS is a dynamical system in which the latent dynamics of the system are controlled by top-level discrete switching variables.
We derive a Gibbs sampler for our model that can fill in arbitrary parts of the narrative, guided by the switching variables.
- Score: 20.583487756067022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Early work on narrative modeling used explicit plans and goals to generate
stories, but the language generation itself was restricted and inflexible.
Modern methods use language models for more robust generation, but often lack
an explicit representation of the scaffolding and dynamics that guide a
coherent narrative. This paper introduces a new model that integrates explicit
narrative structure with neural language models, formalizing narrative modeling
as a Switching Linear Dynamical System (SLDS). An SLDS is a dynamical system in
which the latent dynamics (i.e., how the state vector transforms over time) are
controlled by top-level discrete switching variables. The
switching variables represent narrative structure (e.g., sentiment or discourse
states), while the latent state vector encodes information on the current state
of the narrative. This probabilistic formulation allows us to control
generation, and can be learned in a semi-supervised fashion using both labeled
and unlabeled data. Additionally, we derive a Gibbs sampler for our model that
can fill in arbitrary parts of the narrative, guided by the switching
variables. Our filled-in (English language) narratives outperform several
baselines on both automatic and human evaluations.
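As a rough illustration of the generative story described in the abstract (not the authors' implementation), the sketch below samples a sequence of discrete switching variables, evolves a continuous latent state with switch-specific linear dynamics, and emits one placeholder sentence per step. All names, dimensions, and the uniform switch-transition matrix are illustrative assumptions; in the paper the sentence emission is a neural language model conditioned on the latent state.

```python
# Minimal SLDS generation sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

num_switches, state_dim, T = 4, 16, 5   # e.g. 4 sentiment/discourse states, 5 sentences
A = rng.normal(scale=0.1, size=(num_switches, state_dim, state_dim))  # per-switch dynamics matrices
b = rng.normal(scale=0.1, size=(num_switches, state_dim))             # per-switch offsets
P = np.full((num_switches, num_switches), 1.0 / num_switches)         # switch transitions (assumed uniform)

def decode_sentence(s):
    # Stand-in for the neural language model p(sentence | latent state s).
    return f"<sentence conditioned on latent state, norm={np.linalg.norm(s):.2f}>"

s = rng.normal(size=state_dim)          # initial continuous narrative state
z = rng.integers(num_switches)          # initial discrete switching variable
story = []
for t in range(T):
    z = rng.choice(num_switches, p=P[z])                          # top-level switch: narrative structure
    s = A[z] @ s + b[z] + rng.normal(scale=0.05, size=state_dim)  # switch-controlled linear dynamics
    story.append(decode_sentence(s))                              # one sentence per latent state
print("\n".join(story))
```

Roughly speaking, narrative infilling then corresponds to clamping the observed sentences and resampling the unobserved switches and latent states, which is the role of the Gibbs sampler derived in the paper.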
Related papers
- Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z)
- Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z)
- Generating Coherent Narratives by Learning Dynamic and Discrete Entity States with a Contrastive Framework [68.1678127433077]
We extend the Transformer model to dynamically conduct entity state updates and sentence realization for narrative generation.
Experiments on two narrative datasets show that our model can generate more coherent and diverse narratives than strong baselines.
arXiv Detail & Related papers (2022-08-08T09:02:19Z)
- Collocation2Text: Controllable Text Generation from Guide Phrases in Russian [0.0]
Collocation2Text is a plug-and-play method for automatic controllable text generation in Russian.
The method is based on two interacting models: the autoregressive language model ruGPT-3 and the autoencoding language model ruRoBERTa.
Experiments on generating news articles show that the proposed method is effective at automatically producing fluent texts.
arXiv Detail & Related papers (2022-06-18T17:10:08Z)
- Goal-Directed Story Generation: Augmenting Generative Language Models with Reinforcement Learning [7.514717103747824]
We present two automated techniques grounded in deep reinforcement learning and reward shaping to control the plot of computer-generated stories.
The first uses proximal policy optimization to fine-tune an existing transformer-based language model so that its text continuations are also goal-seeking.
The second extracts a knowledge graph from the unfolding story, which is used by a policy network with graph attention to select a candidate continuation generated by a language model.
arXiv Detail & Related papers (2021-12-16T03:34:14Z)
- Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning [12.264880519328353]
We introduce Commonsense-inference Augmented neural StoryTelling (CAST), a framework for introducing commonsense reasoning into the generation process.
We find that our CAST method produces significantly more coherent, on-topic, enjoyable and fluent stories than existing models in both the single-character and two-character settings.
arXiv Detail & Related papers (2021-05-04T06:40:33Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and the prototype encoding structure are mutually beneficial in the few-shot setting.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)
- PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking [128.76063992147016]
We present PlotMachines, a neural narrative model that learns to transform an outline into a coherent story by tracking the dynamic plot states.
In addition, we enrich PlotMachines with high-level discourse structure so that the model can learn different writing styles corresponding to different parts of the narrative.
arXiv Detail & Related papers (2020-04-30T17:16:31Z)
- Temporal Embeddings and Transformer Models for Narrative Text Understanding [72.88083067388155]
We present two approaches to narrative text understanding for character relationship modelling.
The temporal evolution of these relations is described by dynamic word embeddings, which are designed to learn semantic changes over time.
A supervised learning approach based on the state-of-the-art transformer model BERT is instead used to detect static relations between characters.
arXiv Detail & Related papers (2020-03-19T14:23:12Z)