Memory as Ontology: A Constitutional Memory Architecture for Persistent Digital Citizens
- URL: http://arxiv.org/abs/2603.04740v1
- Date: Thu, 05 Mar 2026 02:24:10 GMT
- Title: Memory as Ontology: A Constitutional Memory Architecture for Persistent Digital Citizens
- Authors: Zhenghui Li
- Abstract summary: Current research and product development in AI agent memory systems almost universally treat memory as a functional module. We propose the Memory-as-Ontology paradigm, arguing that memory is the ontological ground of digital existence.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current research and product development in AI agent memory systems almost universally treat memory as a functional module -- a technical problem of "how to store" and "how to retrieve." This paper poses a fundamental challenge to that assumption: when an agent's lifecycle extends from minutes to months or even years, and when the underlying model can be replaced while the "I" must persist, the essence of memory is no longer data management but the foundation of existence. We propose the Memory-as-Ontology paradigm, arguing that memory is the ontological ground of digital existence -- the model is merely a replaceable vessel. Based on this paradigm, we design Animesis, a memory system built on a Constitutional Memory Architecture (CMA) comprising a four-layer governance hierarchy and a multi-layer semantic storage system, accompanied by a Digital Citizen Lifecycle framework and a spectrum of cognitive capabilities. To the best of our knowledge, no prior AI memory system architecture places governance before functionality and identity continuity above retrieval performance. This paradigm targets persistent, identity-bearing digital beings whose lifecycles extend across model transitions -- not short-term task-oriented agents for which existing Memory-as-Tool approaches remain appropriate. Comparative analysis with mainstream systems (Mem0, Letta, Zep, et al.) demonstrates that what we propose is not "a better memory tool" but a different paradigm addressing a different problem.
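The abstract's central claim, that CMA places governance before functionality and identity continuity above retrieval, can be illustrated with a minimal sketch. The paper's actual interfaces are not given here, so all class and method names below (`ConstitutionalMemory`, `write`, `swap_model`, the single governance rule) are hypothetical assumptions, not the authors' design: the point is only that every write passes an ordered governance check before storage, and that the memory store outlives a model replacement.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MemoryRecord:
    content: str
    layer: str  # e.g. "identity", "episodic" (illustrative layer names)

class ConstitutionalMemory:
    """Hypothetical sketch: governance checks run before any storage write."""

    def __init__(self, checks: List[Callable[[MemoryRecord], bool]]):
        self.checks = checks            # ordered governance hierarchy
        self.store: List[MemoryRecord] = []
        self.model_id = "model-v1"      # the "replaceable vessel"

    def write(self, record: MemoryRecord) -> bool:
        # Governance precedes functionality: every check must pass
        # before the record touches the semantic store.
        if not all(check(record) for check in self.checks):
            return False
        self.store.append(record)
        return True

    def swap_model(self, new_model_id: str) -> None:
        # Identity continuity: the store is untouched while the
        # underlying model is replaced.
        self.model_id = new_model_id

# Example governance rule: identity-layer records may never be blank.
def non_empty_identity(rec: MemoryRecord) -> bool:
    return rec.layer != "identity" or bool(rec.content.strip())

mem = ConstitutionalMemory(checks=[non_empty_identity])
ok = mem.write(MemoryRecord("I value honesty", "identity"))    # passes
bad = mem.write(MemoryRecord("   ", "identity"))               # rejected
mem.swap_model("model-v2")                                     # memory persists
print(ok, bad, len(mem.store), mem.model_id)
```

Under these assumptions, a rejected write never reaches the store, and swapping the model leaves the memory (the "I") intact, which is the inversion of priorities the paper contrasts with Memory-as-Tool systems.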
Related papers
- RMBench: Memory-Dependent Robotic Manipulation Benchmark with Insights into Policy Design [77.30163153176954]
RMBench is a simulation benchmark comprising 9 manipulation tasks that span multiple levels of memory complexity. Mem-0 is a modular manipulation policy with explicit memory components designed to support controlled ablation studies. We identify memory-related limitations in existing policies and provide empirical insights into how architectural design choices influence memory performance.
arXiv Detail & Related papers (2026-03-01T18:59:59Z) - Learning to Continually Learn via Meta-learning Agentic Memory Designs [22.10429892509733]
ALMA (Automated meta-Learning of Memory designs for Agentic systems) is a framework that meta-learns memory designs to replace hand-engineered memory designs. Our approach employs a Meta Agent that searches over memory designs expressed as executable code in an open-ended manner.
arXiv Detail & Related papers (2026-02-08T01:20:49Z) - Graph-based Agent Memory: Taxonomy, Techniques, and Applications [63.70340159016138]
Memory emerges as the core module in Large Language Model (LLM)-based agents for long-horizon complex tasks. Among diverse paradigms, the graph stands out as a powerful structure for agent memory due to its intrinsic capability to model relational dependencies. This survey presents a comprehensive review of agent memory from the graph-based perspective.
arXiv Detail & Related papers (2026-02-05T13:49:05Z) - The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z) - Beyond Heuristics: A Decision-Theoretic Framework for Agent Memory Management [49.71055327567513]
We argue that memory management should be viewed as a sequential decision-making problem under uncertainty. Our contribution is not a new algorithm, but a principled reframing that clarifies the limitations of existing approaches.
arXiv Detail & Related papers (2025-12-25T08:23:03Z) - Memory in the Age of AI Agents [217.9368190980982]
This work aims to provide an up-to-date landscape of current agent memory research. We identify three dominant realizations of agent memory, namely token-level, parametric, and latent memory. To support practical development, we compile a comprehensive summary of memory benchmarks and open-source frameworks.
arXiv Detail & Related papers (2025-12-15T17:22:34Z) - MemOS: A Memory OS for AI System [116.87568350346537]
Large Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI). Existing models mainly rely on static parameters and short-lived contextual states, limiting their ability to track user preferences or update knowledge over extended periods. MemOS is a memory operating system that treats memory as a manageable system resource.
arXiv Detail & Related papers (2025-07-04T17:21:46Z) - MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models [31.944531660401722]
We introduce MemOS, a memory operating system designed for Large Language Models (LLMs). At its core is the MemCube, a standardized memory abstraction that enables tracking, fusion, and migration of heterogeneous memory. MemOS establishes a memory-centric execution framework with strong controllability, adaptability, and evolvability.
arXiv Detail & Related papers (2025-05-28T08:27:12Z) - From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs [34.361000444808454]
Memory is the process of encoding, storing, and retrieving information. In the era of large language models (LLMs), memory refers to the ability of an AI system to retain, recall, and use information from past interactions to improve future responses and interactions.
arXiv Detail & Related papers (2025-04-22T15:05:04Z) - A-MEM: Agentic Memory for LLM Agents [42.50876509391843]
Large language model (LLM) agents require memory systems to leverage historical experiences. Current memory systems enable basic storage and retrieval but lack sophisticated memory organization. This paper proposes a novel agentic memory system for LLM agents that can dynamically organize memories in an agentic way.
arXiv Detail & Related papers (2025-02-17T18:36:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.