Using large language models to study human memory for meaningful
narratives
- URL: http://arxiv.org/abs/2311.04742v2
- Date: Tue, 28 Nov 2023 05:25:45 GMT
- Title: Using large language models to study human memory for meaningful
narratives
- Authors: Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks
- Abstract summary: We show that language models can be used as a scientific instrument for studying human memory for meaningful material.
We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different lengths.
In order to investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the presented stories.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the most impressive achievements of the AI revolution is the
development of large language models that can generate meaningful text and
respond to instructions in plain English with no additional training necessary.
Here we show that language models can be used as a scientific instrument for
studying human memory for meaningful material. We developed a pipeline for
designing large scale memory experiments and analyzing the obtained results. We
performed online memory experiments with a large number of participants and
collected recognition and recall data for narratives of different lengths. We
found that both recall and recognition performance scale linearly with
narrative length. Furthermore, in order to investigate the role of narrative
comprehension in memory, we repeated these experiments using scrambled versions
of the presented stories. We found that even though recall performance declined
significantly, recognition remained largely unaffected. Interestingly, recalls
in this condition seem to follow the original narrative order rather than the
scrambled presentation, pointing to a contextual reconstruction of the story in
memory.
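To make the two headline analyses concrete, here is a minimal sketch in Python, using made-up numbers and hypothetical variable names rather than the authors' actual pipeline or data: a linear fit of recall performance against narrative length, and a Kendall rank correlation testing whether recalls of a scrambled story follow the original narrative order.

```python
import numpy as np
from scipy import stats

# Hypothetical summary data: narrative length (clauses presented) and the
# mean number of clauses participants recalled at each length.
narrative_lengths = np.array([20, 40, 60, 80, 100])
mean_clauses_recalled = np.array([8.1, 14.9, 22.3, 29.0, 36.2])

# (1) Linear scaling of recall performance with narrative length.
slope, intercept, r, p, stderr = stats.linregress(narrative_lengths,
                                                  mean_clauses_recalled)
print(f"recall ~ {slope:.2f} * length + {intercept:.2f} (r = {r:.3f})")

# (2) Scrambled condition: original-narrative positions of the clauses one
# (hypothetical) participant recalled, listed in the order they were recalled.
recalled_original_positions = [1, 2, 4, 3, 6, 8, 9]
tau, p_tau = stats.kendalltau(recalled_original_positions,
                              np.arange(len(recalled_original_positions)))
print(f"Kendall tau vs. original story order: {tau:.2f} (p = {p_tau:.3f})")
```

In this sketch, a good linear fit (r close to 1) would correspond to the linear scaling reported above, and a tau close to 1 in the scrambled condition would correspond to recalls reconstructing the original narrative order.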
Related papers
- EvolvTrip: Enhancing Literary Character Understanding with Temporal Theory-of-Mind Graphs [23.86303464364475]
We introduce EvolvTrip, a perspective-aware temporal knowledge graph that tracks psychological development throughout narratives.
Our findings highlight the importance of explicit representation of temporal character mental states in narrative comprehension.
arXiv Detail & Related papers (2025-06-16T16:05:17Z)
- Random Tree Model of Meaningful Memory [2.412688778659678]
We introduce a statistical ensemble of random trees to represent narratives as hierarchies of key points, where each node is a compressed representation of its descendant leaves.
We find that average recall length increases sublinearly with narrative length, and that individuals summarize increasingly longer narrative segments in each recall sentence.
arXiv Detail & Related papers (2024-12-02T18:50:27Z)
- Generalization v.s. Memorization: Tracing Language Models' Capabilities Back to Pretraining Data [76.90128359866462]
We introduce an extended concept of memorization, distributional memorization, which measures the correlation between the output probabilities and the pretraining data frequency.
This study demonstrates that memorization plays a larger role in simpler, knowledge-intensive tasks, while generalization is the key for harder, reasoning-based tasks.
arXiv Detail & Related papers (2024-07-20T21:24:40Z)
- Are Large Language Models Capable of Generating Human-Level Narratives? [114.34140090869175]
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
We introduce a novel computational framework to analyze narratives through three discourse-level aspects.
We show that explicit integration of discourse features can enhance storytelling, as demonstrated by an improvement of over 40% in neural storytelling.
arXiv Detail & Related papers (2024-07-18T08:02:49Z)
- A Multi-Perspective Analysis of Memorization in Large Language Models [10.276594755936529]
Large Language Models (LLMs) show unprecedented performance in various fields.
LLMs can generate the same content used to train them.
This research comprehensively discusses memorization from various perspectives.
arXiv Detail & Related papers (2024-05-19T15:00:50Z)
- In-Memory Learning: A Declarative Learning Framework for Large Language Models [56.62616975119192]
We propose a novel learning framework that allows agents to align with their environment without relying on human-labeled data.
This entire process transpires within the memory components and is implemented through natural language.
We demonstrate the effectiveness of our framework and provide insights into this problem.
arXiv Detail & Related papers (2024-03-05T08:25:11Z)
- ROME: Memorization Insights from Text, Logits and Representation [17.458840481902644]
This paper proposes an innovative approach named ROME that bypasses direct processing of the training data.
Specifically, we select datasets categorized into three distinct types -- context-independent, conventional, and factual.
Our analysis then focuses on disparities between memorized and non-memorized samples by examining the logits and representations of generated texts.
arXiv Detail & Related papers (2024-03-01T13:15:30Z)
- Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-source and our own fine-tuned LMs across various tasks indicate that memorization varies strongly across different fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z)
- How Relevant is Selective Memory Population in Lifelong Language Learning? [15.9310767099639]
State-of-the-art approaches rely on sparse experience replay as the primary approach to prevent forgetting.
We investigate how relevant the selective memory population is in the lifelong learning process of text classification and question-answering tasks.
arXiv Detail & Related papers (2022-10-03T13:52:54Z)
- Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
arXiv Detail & Related papers (2022-01-07T20:10:47Z)
- Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.