Random Tree Model of Meaningful Memory
- URL: http://arxiv.org/abs/2412.01806v3
- Date: Sun, 23 Feb 2025 19:25:11 GMT
- Title: Random Tree Model of Meaningful Memory
- Authors: Weishun Zhong, Tankut Can, Antonis Georgiou, Ilya Shnayderman, Mikhail Katkov, Misha Tsodyks
- Abstract summary: We introduce a statistical ensemble of random trees to represent narratives as hierarchies of key points, where each node is a compressed representation of its descendant leaves. We find that average recall length increases sublinearly with narrative length, and that individuals summarize increasingly longer narrative segments in each recall sentence.
- Score: 2.412688778659678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional studies of memory for meaningful narratives focus on specific stories and their semantic structures but do not address common quantitative features of recall across different narratives. We introduce a statistical ensemble of random trees to represent narratives as hierarchies of key points, where each node is a compressed representation of its descendant leaves, which are the original narrative segments. Recall is modeled as a process, constrained by working memory capacity, that operates on this hierarchical structure. Our analytical solution aligns with observations from large-scale narrative recall experiments. Specifically, our model explains that (1) average recall length increases sublinearly with narrative length, and (2) individuals summarize increasingly longer narrative segments in each recall sentence. Additionally, the theory predicts that for sufficiently long narratives, a universal, scale-invariant limit emerges, where the fraction of a narrative summarized by a single recall sentence follows a distribution independent of narrative length.
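The recall process described in the abstract lends itself to a small numerical experiment. Below is a minimal Python sketch, not the paper's actual derivation: it assumes a roughly balanced hierarchy with a fixed branching factor (`branch`) and replaces the working-memory constraint with a fixed descent probability (`p_descend`); both parameters are hypothetical stand-ins. A walker starts at the root and, for each child, either descends into it or summarizes that child's whole subtree in a single recall sentence; counting emitted sentences illustrates how sublinear growth of recall length can arise from such a traversal.

```python
import random

def recall_length(n_leaves, branch=3, p_descend=0.7, rng=random):
    """Toy recall walk on a hierarchy over n_leaves narrative segments.

    At each internal node the walker either descends into a child
    (with probability p_descend) or summarizes that child's whole
    subtree in a single recall sentence.  p_descend is a hypothetical
    stand-in for the paper's working-memory constraint, not its
    actual mechanism.  Returns the number of recall sentences emitted.
    """
    if n_leaves <= 1:
        return 1  # a single segment is recalled as one sentence
    # Split the segments roughly evenly among `branch` children.
    base, extra = divmod(n_leaves, branch)
    sizes = [base + (i < extra) for i in range(branch)]
    total = 0
    for size in sizes:
        if size == 0:
            continue
        if rng.random() < p_descend:
            total += recall_length(size, branch, p_descend, rng)
        else:
            total += 1  # summarize this entire subtree in one sentence
    return total

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (30, 300, 3000, 30000):
        trials = [recall_length(n, rng=rng) for _ in range(300)]
        mean = sum(trials) / len(trials)
        print(f"narrative length {n:6d} -> mean recall length {mean:8.1f}")
```

Under this toy dynamics the expected recall length scales as N^α with α = 1 + ln(p_descend)/ln(branch) (about 0.68 for the defaults), which is sublinear whenever p_descend < 1; the paper derives the analogous behavior from a working-memory bound rather than a fixed probability.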
Related papers
- Fine-Grained Modeling of Narrative Context: A Coherence Perspective via Retrospective Questions [48.18584733906447]
This work introduces an original and practical paradigm for narrative comprehension, stemming from the observation that individual passages within a narrative tend to be cohesively related rather than isolated.
We propose a fine-grained modeling of narrative context, by formulating a graph dubbed NarCo, which explicitly depicts task-agnostic coherence dependencies.
arXiv Detail & Related papers (2024-02-21T06:14:04Z) - Large-scale study of human memory for meaningful narratives [0.0]
We develop a pipeline that uses large language models (LLMs) to design naturalistic narrative stimuli for large-scale recall and recognition memory experiments.
We performed online memory experiments with a large number of participants and collected recognition and recall data for narratives of different sizes.
arXiv Detail & Related papers (2023-11-08T15:11:57Z) - NarraSum: A Large-Scale Dataset for Abstractive Narrative Summarization [26.80378373420446]
NarraSum is a large-scale narrative summarization dataset.
It contains 122K narrative documents, which are collected from plot descriptions of movies and TV episodes with diverse genres, and their corresponding abstractive summaries.
Experiments show that there is a large performance gap between humans and the state-of-the-art summarization models on NarraSum.
arXiv Detail & Related papers (2022-12-02T22:51:51Z) - A Focused Study on Sequence Length for Dialogue Summarization [68.73335643440957]
First, we analyze the length differences between existing models' outputs and the corresponding human references.
Second, we identify salient features for summary length prediction by comparing different model settings.
Third, we experiment with a length-aware summarizer and show notable improvement over existing models when summary length can be well incorporated.
arXiv Detail & Related papers (2022-09-24T02:49:48Z) - Computational Lens on Cognition: Study Of Autobiographical Versus Imagined Stories With Large-Scale Language Models [95.88620740809004]
We study differences in the narrative flow of events in autobiographical versus imagined stories using GPT-3.
We found that imagined stories have higher sequentiality than autobiographical stories.
In comparison to imagined stories, autobiographical stories contain more concrete words and words related to the first person.
arXiv Detail & Related papers (2022-01-07T20:10:47Z) - Paragraph-level Commonsense Transformers with Recurrent Memory [77.4133779538797]
We train PARA-COMET, a discourse-aware model that incorporates paragraph-level information to generate coherent commonsense inferences from narratives.
Our results show that PARA-COMET outperforms the sentence-level baselines, particularly in generating inferences that are both coherent and novel.
arXiv Detail & Related papers (2020-10-04T05:24:12Z) - Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z) - The Shmoop Corpus: A Dataset of Stories with Loosely Aligned Summaries [72.48439126769627]
We introduce the Shmoop Corpus: a dataset of 231 stories paired with detailed multi-paragraph summaries for each individual chapter.
From the corpus, we construct a set of common NLP tasks, including Cloze-form question answering and a simplified form of abstractive summarization.
We believe that the unique structure of this corpus provides an important foothold towards making machine story comprehension more approachable.
arXiv Detail & Related papers (2019-12-30T21:03:59Z)