Computational Lens on Cognition: Study Of Autobiographical Versus
Imagined Stories With Large-Scale Language Models
- URL: http://arxiv.org/abs/2201.02662v1
- Date: Fri, 7 Jan 2022 20:10:47 GMT
- Title: Computational Lens on Cognition: Study Of Autobiographical Versus
Imagined Stories With Large-Scale Language Models
- Authors: Maarten Sap, Anna Jafarpour, Yejin Choi, Noah A. Smith, James W.
Pennebaker, and Eric Horvitz
- Abstract summary: We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
- Score: 95.88620740809004
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lifelong experiences and learned knowledge lead to shared expectations about
how common situations tend to unfold. Such knowledge enables people to
interpret story narratives and identify salient events effortlessly. We study
differences in the narrative flow of events in autobiographical versus imagined
stories using GPT-3, one of the largest neural language models created to date.
The diary-like stories were written by crowdworkers about either a recently
experienced event or an imagined event on the same topic. To analyze the
narrative flow of events of these stories, we measured sentence
*sequentiality*, which compares the probability of a sentence with and without
its preceding story context. We found that imagined stories have higher
sequentiality than autobiographical stories, and that the sequentiality of
autobiographical stories is higher when they are retold than when freshly
recalled. Through an annotation of events in story sentences, we found that the
story types contain similar proportions of major salient events, but that the
autobiographical stories are denser in factual minor events. Furthermore, in
comparison to imagined stories, autobiographical stories contain more concrete
words and words related to the first person, cognitive processes, time, space,
numbers, social words, and core drives and needs. Our findings highlight the
opportunity to investigate memory and cognition with large-scale statistical
language models.
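As a rough illustration of the sequentiality measure described above, the sketch below scores a sentence by the length-normalized gain in log-probability when the preceding story context is appended to the topic prompt. This is a minimal sketch, not the paper's implementation: it assumes an openly available HuggingFace causal LM (GPT-2) as a stand-in for GPT-3, and the prompt construction and per-token normalization are assumptions, since the abstract does not spell them out.

```python
# Minimal sketch of a sentence-level sequentiality score.
# Assumptions: GPT-2 stands in for GPT-3 (which is API-only), and the score is
# the per-token log-probability gain from conditioning on the preceding context.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(context: str, sentence: str):
    """Return (sum of log-probs of the sentence's tokens given context, token count)."""
    ctx_ids = tokenizer.encode(context)
    sent_ids = tokenizer.encode(" " + sentence)  # leading space for GPT-2's BPE
    input_ids = torch.tensor([ctx_ids + sent_ids])
    with torch.no_grad():
        logits = model(input_ids).logits          # shape: (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0], dim=-1)
    total = 0.0
    for i, tok in enumerate(sent_ids):
        # Logits at position t predict the token at position t + 1.
        total += log_probs[len(ctx_ids) + i - 1, tok].item()
    return total, len(sent_ids)

def sequentiality(topic: str, preceding: str, sentence: str) -> float:
    """Length-normalized log-probability gain from adding the preceding story context."""
    lp_topic, n_tokens = sentence_logprob(topic, sentence)
    lp_context, _ = sentence_logprob(topic + " " + preceding, sentence)
    return (lp_context - lp_topic) / n_tokens

# Hypothetical example sentences, for illustration only.
print(sequentiality(
    topic="A story about a trip to the dentist.",
    preceding="I woke up early and drove to the clinic.",
    sentence="The dentist told me I needed a filling.",
))
```

A positive value indicates the sentence is more predictable given the preceding story context than from the topic alone; under this reading, the paper's finding that imagined stories score higher suggests their events follow a more expected flow.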
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z)
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an improvement of over 40% in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- The GPT-WritingPrompts Dataset: A Comparative Analysis of Character Portrayal in Short Stories [17.184517720465404]
We quantify and compare the emotional and descriptive features of storytelling from both generative processes, human and machine, along a set of six dimensions.
We find that generated stories differ significantly from human stories along all six dimensions, and that human and machine generations display similar biases when grouped according to the narrative point-of-view and gender of the main protagonist.
arXiv Detail & Related papers (2024-06-24T16:24:18Z)
- Lost in Recursion: Mining Rich Event Semantics in Knowledge Graphs [2.657233098224094]
We show how narratives concerning complex events can be constructed and utilized.
We provide an algorithm that mines such narratives from texts to account for different perspectives on complex events.
arXiv Detail & Related papers (2024-04-25T08:33:08Z)
- Neural Story Planning [8.600049807193413]
We present an approach to story plot generation that unifies causal planning with neural language models.
Our system infers the preconditions for events in the story and then events that will cause those conditions to become true.
Results indicate that our proposed method produces more coherent plotlines than several strong baselines.
arXiv Detail & Related papers (2022-12-16T21:29:41Z)
- Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z)
- Hide-and-Tell: Learning to Bridge Photo Streams for Visual Storytelling [86.42719129731907]
We propose to explicitly learn to imagine a storyline that bridges the visual gap.
We train the network to produce a full, plausible story even with missing photos.
In experiments, we show that our hide-and-tell scheme and network design are indeed effective at storytelling.
arXiv Detail & Related papers (2020-02-03T14:22:18Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning that adds a discriminative objective to distinguish true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
arXiv Detail & Related papers (2020-01-15T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.