Bringing Stories Alive: Generating Interactive Fiction Worlds
- URL: http://arxiv.org/abs/2001.10161v1
- Date: Tue, 28 Jan 2020 04:13:05 GMT
- Title: Bringing Stories Alive: Generating Interactive Fiction Worlds
- Authors: Prithviraj Ammanabrolu, Wesley Cheung, Dan Tu, William Broniec, Mark
O. Riedl
- Abstract summary: We focus on procedurally generating interactive fiction worlds that players "see" and "talk to" using natural language.
We present a method that first extracts a partial knowledge graph encoding basic information regarding world structure.
This knowledge graph is then automatically completed utilizing thematic knowledge and used to guide a neural language generation model.
- Score: 19.125250090589397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: World building forms the foundation of any task that requires narrative
intelligence. In this work, we focus on procedurally generating interactive
fiction worlds---text-based worlds that players "see" and "talk to" using
natural language. Generating these worlds requires referencing everyday and
thematic commonsense priors in addition to being semantically consistent,
interesting, and coherent throughout. Using existing story plots as
inspiration, we present a method that first extracts a partial knowledge graph
encoding basic information regarding world structure such as locations and
objects. This knowledge graph is then automatically completed utilizing
thematic knowledge and used to guide a neural language generation model that
fleshes out the rest of the world. We perform human participant-based
evaluations, testing our neural model's ability to extract and fill in a
knowledge graph and to generate language conditioned on it against rule-based
and human-made baselines. Our code is available at
https://github.com/rajammanabrolu/WorldGeneration.
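The pipeline described in the abstract (extract a partial knowledge graph from a plot, complete it with thematic priors, then generate text conditioned on it) can be illustrated with a minimal, rule-based sketch. All data, relation names, and priors below are hypothetical stand-ins for the paper's neural extraction and generation models:

```python
# Minimal sketch of the extract -> complete -> generate pipeline.
# Rule-based stand-ins replace the neural components; the toy sentence
# pattern and THEMATIC_PRIORS table are illustrative assumptions.

def extract_partial_graph(plot_sentences):
    """Toy extraction: pull (location, has, object) triples from
    sentences of the form "The <location> contains a <object>."."""
    triples = []
    for s in plot_sentences:
        words = s.rstrip(".").split()
        if "contains" in words:
            triples.append((words[1], "has", words[-1]))
    return triples

# Hypothetical thematic commonsense priors per location type.
THEMATIC_PRIORS = {
    "library": ["bookshelf", "lantern"],
    "dungeon": ["chains"],
}

def complete_graph(triples):
    """Completion step: add thematically related objects that the
    partial graph is missing."""
    completed = list(triples)
    for loc, _, _ in triples:
        for obj in THEMATIC_PRIORS.get(loc, []):
            t = (loc, "has", obj)
            if t not in completed:
                completed.append(t)
    return completed

def describe(triples, location):
    """Template stand-in for graph-conditioned language generation."""
    objects = [o for loc, _, o in triples if loc == location]
    return f"You are in the {location}. You see: " + ", ".join(objects) + "."

plot = ["The library contains a tome."]
world_graph = complete_graph(extract_partial_graph(plot))
# describe(world_graph, "library") mentions the extracted tome
# plus the thematically completed objects
```

In the paper, the extraction and generation steps are learned models and the completion step draws on thematic knowledge; here each is a transparent rule so the data flow between stages is easy to follow.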
Related papers
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- ScriptWorld: Text Based Environment For Learning Procedural Knowledge [2.0491741153610334]
ScriptWorld is a text-based environment for teaching agents about real-world daily chores.
We provide gaming environments for 10 daily activities and perform a detailed analysis of the proposed environment.
We leverage features obtained from pre-trained language models in the RL agents.
arXiv Detail & Related papers (2023-07-08T05:43:03Z) - Learning to Imagine: Visually-Augmented Natural Language Generation [73.65760028876943]
We propose a method to make pre-trained language models (PLMs) Learn to Imagine for Visually-augmented natural language gEneration.
We use a diffusion model to synthesize high-quality images conditioned on the input texts.
We conduct synthesis for each sentence rather than generate only one image for an entire paragraph.
arXiv Detail & Related papers (2023-05-26T13:59:45Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning [53.92465205531759]
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.
We train a contrastive bi-encoder model to align stories with human critiques, building a general purpose preference model.
We further fine-tune the contrastive reward model using a prompt-learning technique to increase story generation robustness.
arXiv Detail & Related papers (2022-10-14T13:21:33Z)
- Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
arXiv Detail & Related papers (2022-04-04T17:57:11Z)
- Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- Learning Knowledge Graph-based World Models of Textual Environments [16.67845396797253]
This work focuses on the task of building world models of text-based game environments.
Our world model learns to simultaneously: (1) predict changes in the world caused by an agent's actions when representing the world as a knowledge graph; and (2) generate the set of contextually relevant natural language actions required to operate in the world.
arXiv Detail & Related papers (2021-06-17T15:45:54Z)
- Modeling Worlds in Text [16.67845396797253]
We provide a dataset that enables the creation of learning agents that can build knowledge graph-based world models of interactive narratives.
Our dataset provides 24198 mappings between rich natural language observations and knowledge graphs.
The data is collected across 27 games in multiple genres, with a further 7836 held-out instances over 9 additional games in the test set.
arXiv Detail & Related papers (2021-06-17T15:02:16Z)
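The knowledge-graph world models described in the last two entries (1) predict how an agent's action changes the graph and (2) generate the contextually valid actions. A minimal sketch of that interface, with a hypothetical triple schema and a single hand-coded action rule standing in for the learned components:

```python
# Minimal sketch of a knowledge-graph world model for a text game.
# The (subject, relation, object) schema and the "take" rule are
# illustrative assumptions, not the papers' learned models.

class GraphWorldModel:
    def __init__(self, triples):
        self.graph = set(triples)  # world state as a set of fact triples

    def valid_actions(self):
        """Generate contextually relevant actions from the current graph."""
        return [f"take {o}" for s, r, o in sorted(self.graph)
                if s == "room" and r == "has"]

    def apply(self, action):
        """Predict the graph change caused by an action: a taken object
        moves from the room to the player's inventory."""
        if action.startswith("take "):
            obj = action[len("take "):]
            self.graph.discard(("room", "has", obj))
            self.graph.add(("player", "has", obj))
```

A dataset like the one above (observation-to-graph mappings across games) would supply training pairs for replacing both hand-coded methods with learned models.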
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.