PISA: A Pragmatic Psych-Inspired Unified Memory System for Enhanced AI Agency
- URL: http://arxiv.org/abs/2510.15966v1
- Date: Sun, 12 Oct 2025 10:34:35 GMT
- Title: PISA: A Pragmatic Psych-Inspired Unified Memory System for Enhanced AI Agency
- Authors: Shian Jia, Ziyang Huang, Xinbo Wang, Haofei Zhang, Mingli Song,
- Abstract summary: Existing work often lacks adaptability to diverse tasks and overlooks the constructive and task-oriented role of AI agent memory. We propose PISA, a pragmatic, psych-inspired unified memory system that treats memory as a constructive and adaptive process. Our empirical evaluation, conducted on the existing LOCOMO benchmark and our newly proposed AggQA benchmark for data analysis tasks, confirms that PISA sets a new state-of-the-art by significantly enhancing adaptability and long-term knowledge retention.
- Score: 50.712873697511206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memory systems are fundamental to AI agents, yet existing work often lacks adaptability to diverse tasks and overlooks the constructive and task-oriented role of AI agent memory. Drawing from Piaget's theory of cognitive development, we propose PISA, a pragmatic, psych-inspired unified memory system that addresses these limitations by treating memory as a constructive and adaptive process. To enable continuous learning and adaptability, PISA introduces a trimodal adaptation mechanism (i.e., schema updation, schema evolution, and schema creation) that preserves coherent organization while supporting flexible memory updates. Building on these schema-grounded structures, we further design a hybrid memory access architecture that seamlessly integrates symbolic reasoning with neural retrieval, significantly improving retrieval accuracy and efficiency. Our empirical evaluation, conducted on the existing LOCOMO benchmark and our newly proposed AggQA benchmark for data analysis tasks, confirms that PISA sets a new state-of-the-art by significantly enhancing adaptability and long-term knowledge retention.
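The abstract's trimodal adaptation mechanism can be pictured as a routing decision over existing schemata. The sketch below is an illustrative guess at that routing, not the paper's method: the `Schema` class, the lexical `similarity` measure, and the thresholds are all invented here for demonstration (the paper presumably uses learned representations).

```python
from dataclasses import dataclass, field

@dataclass
class Schema:
    topic: str
    facts: list = field(default_factory=list)

def similarity(schema, observation):
    # Toy word-overlap score; a stand-in for whatever neural measure PISA uses.
    vocab = set(schema.topic.lower().split())
    for fact in schema.facts:
        vocab |= set(fact.lower().split())
    obs = set(observation.lower().split())
    return len(vocab & obs) / max(len(obs), 1)

def adapt(schemas, observation, hi=0.6, lo=0.2):
    """Route a new observation to one of three adaptation modes."""
    best = max(schemas, key=lambda s: similarity(s, observation), default=None)
    score = similarity(best, observation) if best else 0.0
    if best and score >= hi:
        # Schema update: fold the observation into an existing schema.
        best.facts.append(observation)
        return "update", best
    if best and score >= lo:
        # Schema evolution: specialize a related schema rather than overwrite it.
        child = Schema(topic=best.topic + " (refined)", facts=[observation])
        schemas.append(child)
        return "evolve", child
    # Schema creation: nothing related exists yet.
    new = Schema(topic=observation, facts=[observation])
    schemas.append(new)
    return "create", new
```

The three branches correspond to the abstract's schema updation, schema evolution, and schema creation; the point of the sketch is only that one similarity signal can drive all three.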
Related papers
- Learning to Continually Learn via Meta-learning Agentic Memory Designs [22.10429892509733]
ALMA (Automated meta-Learning of Memory designs for Agentic systems) is a framework that meta-learns memory designs to replace hand-engineered memory designs. Our approach employs a Meta Agent that searches over memory designs expressed as executable code in an open-ended manner.
arXiv Detail & Related papers (2026-02-08T01:20:49Z)
- The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z)
- Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling [83.29209853451697]
Multi-step retrieval-augmented generation (RAG) has become a widely adopted strategy for enhancing large language models (LLMs). We introduce HGMem, a hypergraph-based memory mechanism that extends the concept of memory into a dynamic, expressive structure for complex reasoning and global understanding. In our approach, memory is represented as a hypergraph whose hyperedges correspond to distinct memory units, enabling the progressive formation of higher-order interactions within memory.
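A minimal sketch of the hyperedges-as-memory-units idea described above. Everything here (the class, the write/query API, and the count-based ranking) is an illustrative assumption, not HGMem's actual design: each stored unit links a set of entities, and retrieval favors units that connect more of the query's entities.

```python
from collections import defaultdict

class HypergraphMemory:
    """Toy hypergraph memory: each hyperedge is one memory unit
    linking several entity nodes at once."""

    def __init__(self):
        self.edges = {}                    # edge id -> (entity set, text)
        self.incidence = defaultdict(set)  # entity -> ids of edges touching it
        self._next_id = 0

    def write(self, entities, text):
        eid = self._next_id
        self._next_id += 1
        self.edges[eid] = (frozenset(entities), text)
        for entity in entities:
            self.incidence[entity].add(eid)
        return eid

    def query(self, entities):
        # Rank memory units by how many query entities they connect --
        # a crude stand-in for higher-order relational matching.
        hits = defaultdict(int)
        for entity in entities:
            for eid in self.incidence[entity]:
                hits[eid] += 1
        ranked = sorted(hits.items(), key=lambda kv: -kv[1])
        return [self.edges[eid][1] for eid, _ in ranked]
```

Unlike a pairwise graph, one hyperedge can tie three or more entities to a single memory unit, which is what makes the "higher-order interactions" in the abstract expressible directly.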
arXiv Detail & Related papers (2025-12-30T03:13:10Z)
- Memory in the Age of AI Agents [217.9368190980982]
This work aims to provide an up-to-date landscape of current agent memory research. We identify three dominant realizations of agent memory, namely token-level, parametric, and latent memory. To support practical development, we compile a comprehensive summary of memory benchmarks and open-source frameworks.
arXiv Detail & Related papers (2025-12-15T17:22:34Z)
- Fundamentals of Building Autonomous LLM Agents [64.39018305018904]
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns to develop "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z)
- CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of the agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z)
- Memory Management and Contextual Consistency for Long-Running Low-Code Agents [0.0]
This paper proposes a novel hybrid memory system designed specifically for LCNC agents. Inspired by cognitive science, our architecture combines episodic and semantic memory components with a proactive "Intelligent Decay" mechanism. A key innovation is a user-centric visualization interface, aligned with the LCNC paradigm, which allows non-technical users to manage the agent's memory directly.
arXiv Detail & Related papers (2025-09-27T08:01:26Z)
- A Scenario-Driven Cognitive Approach to Next-Generation AI Memory [12.798608799338275]
COLMA is a novel framework that integrates cognitive scenarios, memory processes, and storage mechanisms into a cohesive design. It provides a structured foundation for developing AI systems capable of lifelong learning and human-like reasoning.
arXiv Detail & Related papers (2025-09-16T16:43:07Z)
- Memory-Augmented Transformers: A Systematic Review from Neuroscience Principles to Enhanced Model Architectures [4.942399246128045]
Memory is fundamental to intelligence, enabling learning, reasoning, and adaptability across biological and artificial systems. Transformers excel at sequence modeling, but face limitations in long-range context retention, continual learning, and knowledge integration. This review presents a unified framework bridging neuroscience principles, including dynamic multi-timescale memory, selective attention, and consolidation.
arXiv Detail & Related papers (2025-08-14T16:48:38Z)
- Contextual Memory Intelligence -- A Foundational Paradigm for Human-AI Collaboration and Reflective Generative AI Systems [0.0]
This paper introduces Contextual Memory Intelligence (CMI) as a new paradigm for building intelligent systems. CMI repositions memory as an adaptive infrastructure necessary for longitudinal coherence, explainability, and responsible decision-making. This enhances human-AI collaboration, generative AI design, and the resilience of institutions.
arXiv Detail & Related papers (2025-05-28T18:59:16Z)
- Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
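Attenuating old memories in parameter distributions, as described above, can be illustrated with an EWC-style quadratic penalty whose importance weights are decayed before reuse. This is a generic sketch, not the paper's algorithm; the function name, the decay factor `gamma`, and the plain-list representation are all assumptions made here.

```python
def attenuated_penalty(params, anchors, importance, gamma=0.9, lam=1.0):
    """EWC-style quadratic penalty with attenuated importance weights.

    params     -- current parameter values
    anchors    -- parameter values saved after the previous task
    importance -- per-parameter importance (e.g. Fisher diagonal)
    gamma      -- attenuation factor in (0, 1]; smaller = more plasticity
    lam        -- overall penalty strength
    """
    # Down-weighting old importances relaxes the constraint on past
    # memories, trading some stability for learning plasticity.
    decayed = [gamma * w for w in importance]
    penalty = lam / 2 * sum(w * (p - a) ** 2
                            for w, p, a in zip(decayed, params, anchors))
    return penalty, decayed
```

With `gamma = 1.0` this reduces to the standard consolidation penalty; lowering `gamma` shrinks the pull toward old parameter values in exact proportion.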
arXiv Detail & Related papers (2023-08-29T02:43:58Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.