Multiversal views on language models
- URL: http://arxiv.org/abs/2102.06391v2
- Date: Mon, 15 Feb 2021 05:25:35 GMT
- Title: Multiversal views on language models
- Authors: Laria Reynolds and Kyle McDonell
- Abstract summary: We present a framework in which generative language models are conceptualized as multiverse generators.
This framework also applies to human imagination and is core to how we read and write fiction.
We call for exploration into this commonality through new forms of interfaces which allow humans to couple their imagination to AI to write.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The virtuosity of language models like GPT-3 opens a new world of possibility
for human-AI collaboration in writing. In this paper, we present a framework in
which generative language models are conceptualized as multiverse generators.
This framework also applies to human imagination and is core to how we read and
write fiction. We call for exploration into this commonality through new forms
of interfaces which allow humans to couple their imagination to AI to write,
explore, and understand non-linear fiction. We discuss the early insights we
have gained from actively pursuing this approach by developing and testing a
novel multiversal GPT-3-assisted writing interface.
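The multiverse-generator framing can be made concrete with a small sketch: instead of taking a single continuation from the model, sample several continuations at each step and arrange them as a branching tree of possible futures. The following is an illustrative sketch only, not the authors' interface; it assumes the HuggingFace transformers library with GPT-2 as a local stand-in for GPT-3, and the tree structure, branching factor, and segment length are arbitrary choices for demonstration.

```python
# Illustrative sketch: treating a causal language model as a "multiverse
# generator" by sampling several continuations at every node of a tree.
# Assumes HuggingFace transformers and GPT-2 as a stand-in for GPT-3.
from dataclasses import dataclass, field
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

@dataclass
class Node:
    text: str                        # text segment introduced at this node
    children: list = field(default_factory=list)

def expand(prompt: str, depth: int = 2, branches: int = 3,
           tokens_per_step: int = 20) -> Node:
    """Recursively sample `branches` continuations at each node,
    building a tree ("multiverse") of possible futures for the prompt."""
    node = Node(text=prompt)
    if depth == 0:
        return node
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,              # stochastic sampling -> distinct branches
        top_p=0.9,
        max_new_tokens=tokens_per_step,
        num_return_sequences=branches,
        pad_token_id=tokenizer.eos_token_id,
    )
    for seq in outputs:
        # Keep only the newly generated tokens for this branch.
        continuation = tokenizer.decode(
            seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
        child = expand(prompt + continuation, depth - 1,
                       branches, tokens_per_step)
        child.text = continuation
        node.children.append(child)
    return node

def show(node: Node, indent: int = 0) -> None:
    """Print the branching structure as an indented outline."""
    print(" " * indent + repr(node.text[:60]))
    for child in node.children:
        show(child, indent + 2)

if __name__ == "__main__":
    tree = expand("The door at the end of the corridor opened onto", depth=2)
    show(tree)
```

In an interface of the kind the paper describes, each node of such a tree would be presented to the writer to read, prune, and extend, coupling human curation of the branches to machine generation of their contents.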
Related papers
- Beyond Reality: The Pivotal Role of Generative AI in the Metaverse (arXiv, 2023-07-28)
This paper offers a comprehensive exploration of how generative AI technologies are shaping the Metaverse.
We delve into the applications of text generation models like ChatGPT and GPT-3, which are enhancing conversational interfaces with AI-generated characters.
We also examine the potential of 3D model generation technologies like Point-E and Lumirithmic in creating realistic virtual objects.
- SciMON: Scientific Inspiration Machines Optimized for Novelty (arXiv, 2023-05-23)
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models take background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
- RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text (arXiv, 2023-05-22)
We introduce RecurrentGPT, a language-based simulacrum of the recurrence mechanism in RNNs.
At each timestep, RecurrentGPT generates a paragraph of text and updates its language-based long short-term memory.
RecurrentGPT is an initial step towards next-generation computer-assisted writing systems.
- AIwriting: Relations Between Image Generation and Digital Writing (arXiv, 2023-05-18)
During 2022, AI text generation systems such as GPT-3 and AI text-to-image generation systems such as DALL-E 2 made exponential leaps forward.
In this panel, a group of electronic literature authors and theorists consider new opportunities for human creativity presented by these systems.
- Visualize Before You Write: Imagination-Guided Open-Ended Text Generation (arXiv, 2022-10-07)
We propose iNLG, which uses machine-generated images to guide language models in open-ended text generation.
Experiments and analyses demonstrate the effectiveness of iNLG on open-ended text generation tasks.
- TEMOS: Generating diverse human motions from textual descriptions (arXiv, 2022-04-25)
We address the problem of generating diverse 3D human motions from textual descriptions.
We propose TEMOS, a text-conditioned generative model leveraging variational autoencoder (VAE) training with human motion data.
We show that the TEMOS framework can produce both skeleton-based animations, as in prior work, and more expressive SMPL body motions.
- CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities (arXiv, 2022-01-18)
We present CoAuthor, a dataset designed for revealing GPT-3's capabilities in assisting creative and argumentative writing.
We demonstrate that CoAuthor can address questions about GPT-3's language, ideation, and collaboration capabilities.
We discuss how this work may facilitate a more principled discussion around LMs' promises and pitfalls in relation to interaction design.
- Wordcraft: a Human-AI Collaborative Editor for Story Writing (arXiv, 2021-07-15)
We propose Wordcraft, an AI-assisted editor for story writing in which a writer and a dialog system collaborate to write a story.
Our novel interface uses few-shot learning and the natural affordances of conversation to support a variety of interactions.
- Bringing Stories Alive: Generating Interactive Fiction Worlds (arXiv, 2020-01-28)
We focus on procedurally generating interactive fiction worlds that players "see" and "talk to" using natural language.
We present a method that first extracts a partial knowledge graph encoding basic information regarding world structure.
This knowledge graph is then automatically completed utilizing thematic knowledge and used to guide a neural language generation model.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.