Emerging Human-like Strategies for Semantic Memory Foraging in Large Language Models
- URL: http://arxiv.org/abs/2603.01822v1
- Date: Mon, 02 Mar 2026 12:55:51 GMT
- Title: Emerging Human-like Strategies for Semantic Memory Foraging in Large Language Models
- Authors: Eric Lacosse, Mariana Duarte, Peter M. Todd, Daniel C. McNamee
- Abstract summary: Both humans and Large Language Models (LLMs) store a vast repository of semantic memories. In humans, efficient and strategic access to this memory store is a critical foundation for a variety of cognitive functions.
- Score: 0.8749675983608171
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Both humans and Large Language Models (LLMs) store a vast repository of semantic memories. In humans, efficient and strategic access to this memory store is a critical foundation for a variety of cognitive functions. Such access has long been a focus of psychology and the computational mechanisms behind it are now well characterized. Much of this understanding has been gleaned from a widely-used neuropsychological and cognitive science assessment called the Semantic Fluency Task (SFT), which requires the generation of as many semantically constrained concepts as possible. Our goal is to apply mechanistic interpretability techniques to bring greater rigor to the study of semantic memory foraging in LLMs. To this end, we present preliminary results examining SFT as a case study. A central focus is on convergent and divergent patterns of generative memory search, which in humans play complementary strategic roles in efficient memory foraging. We show that these same behavioral signatures, critical to human performance on the SFT, also emerge as identifiable patterns in LLMs across distinct layers. Potentially, this analysis provides new insights into how LLMs may be adapted into closer cognitive alignment with humans, or alternatively, guided toward productive cognitive \emph{disalignment} to enhance complementary strengths in human-AI interaction.
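The abstract's central behavioral signature, alternating between convergent (within-cluster) and divergent (between-cluster) search on the Semantic Fluency Task, is commonly operationalized as a drop in semantic similarity between consecutively generated items. The sketch below illustrates that idea under stated assumptions: the `EMB` vectors, the `cluster_switches` helper, and the 0.5 threshold are all toy choices for illustration, not the paper's method; a real analysis would use model-derived representations (e.g. activations from an LLM layer) in place of the hand-made vectors.

```python
import math

# Toy 3-d embedding vectors (illustrative only; a real analysis would use
# model-derived representations such as activations from an LLM layer).
EMB = {
    "dog":    (0.9, 0.1, 0.0),
    "cat":    (0.8, 0.2, 0.1),
    "wolf":   (0.9, 0.0, 0.2),
    "shark":  (0.1, 0.9, 0.1),
    "salmon": (0.0, 0.8, 0.2),
    "eagle":  (0.1, 0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def cluster_switches(sequence, threshold=0.5):
    """Return indices where similarity to the previous item drops below
    the threshold -- a common proxy for a switch from convergent
    (within-patch) exploitation to divergent (between-patch) exploration."""
    return [i for i in range(1, len(sequence))
            if cosine(EMB[sequence[i - 1]], EMB[sequence[i]]) < threshold]

# A short fluency run for the category "animals": mammals, then fish, then a bird.
seq = ["dog", "cat", "wolf", "shark", "salmon", "eagle"]
print(cluster_switches(seq))  # -> [3, 5]: positions where a new semantic patch begins
```

With these toy vectors, the similarity stays high inside each patch (dog→cat→wolf, shark→salmon) and drops at the two patch boundaries, so the function reports switches at positions 3 and 5.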
Related papers
- UniCog: Uncovering Cognitive Abilities of LLMs through Latent Mind Space Analysis [69.50752734049985]
A growing body of research suggests that the cognitive processes of large language models (LLMs) differ fundamentally from those of humans. We propose UniCog, a unified framework that analyzes LLM cognition via a latent mind space.
arXiv Detail & Related papers (2026-01-25T16:19:00Z) - AI Meets Brain: Memory Systems from Cognitive Neuroscience to Autonomous Agents [69.39123054975218]
Memory serves as the pivotal nexus bridging past and future. Recent research on autonomous agents has increasingly focused on designing efficient memory by drawing on cognitive neuroscience.
arXiv Detail & Related papers (2025-12-29T10:01:32Z) - Think Socially via Cognitive Reasoning [94.60442643943696]
We introduce Cognitive Reasoning, a paradigm modeled on human social cognition. CogFlow is a complete framework that instills this capability in LLMs.
arXiv Detail & Related papers (2025-09-26T16:27:29Z) - Visual Large Language Models Exhibit Human-Level Cognitive Flexibility in the Wisconsin Card Sorting Test [5.346677002840565]
This study assesses the cognitive flexibility of state-of-the-art Visual Large Language Models (VLLMs). Our results reveal that VLLMs achieve or surpass human-level set-shifting capabilities under chain-of-thought prompting with text-based inputs.
arXiv Detail & Related papers (2025-05-28T08:40:55Z) - From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning [63.25540801694765]
Large Language Models (LLMs) demonstrate striking linguistic abilities, yet whether they balance compression and meaning as humans do remains unclear. We apply the Information Bottleneck principle to quantitatively compare how LLMs and humans navigate this compression-meaning trade-off.
arXiv Detail & Related papers (2025-05-21T16:29:00Z) - Talking to the brain: Using Large Language Models as Proxies to Model Brain Semantic Representation [6.870138108382051]
We introduce a novel paradigm leveraging multimodal large language models (LLMs) as proxies to extract semantic information from naturalistic images. LLM-derived representations successfully predict established neural activity patterns measured by fMRI. A brain semantic network constructed from LLM-derived representations identifies meaningful clusters reflecting functional and contextual associations.
arXiv Detail & Related papers (2025-02-26T00:40:28Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Towards a Psychology of Machines: Large Language Models Predict Human Memory [0.0]
Large language models (LLMs) have shown remarkable abilities in natural language processing. This study explores whether LLMs can predict human memory performance in tasks involving garden-path sentences and contextual information.
arXiv Detail & Related papers (2024-03-08T08:41:14Z) - Empowering Working Memory for Large Language Model Agents [9.83467478231344]
This paper explores the potential of applying cognitive psychology's working memory frameworks to large language models (LLMs).
An innovative model is proposed incorporating a centralized Working Memory Hub and Episodic Buffer access to retain memories across episodes.
This architecture aims to provide greater continuity for nuanced contextual reasoning during intricate tasks and collaborative scenarios.
arXiv Detail & Related papers (2023-12-22T05:59:00Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.