EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning
- URL: http://arxiv.org/abs/2601.02163v2
- Date: Fri, 09 Jan 2026 02:23:07 GMT
- Title: EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning
- Authors: Chuanrui Hu, Xingze Gao, Zuyi Zhou, Dannong Xu, Yi Bai, Xintong Li, Hui Zhang, Tong Li, Chong Zhang, Lidong Bing, Yafeng Deng
- Abstract summary: Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks.
- Score: 42.339841548168565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. Existing memory systems often store isolated records and retrieve fragments, limiting their ability to consolidate evolving user states and resolve conflicts. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. Episodic Trace Formation converts dialogue streams into MemCells that capture episodic traces, atomic facts, and time-bounded Foresight signals. Semantic Consolidation organizes MemCells into thematic MemScenes, distilling stable semantic structures and updating user profiles. Reconstructive Recollection performs MemScene-guided agentic retrieval to compose the necessary and sufficient context for downstream reasoning. Experiments on LoCoMo and LongMemEval show that EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks. We further report a profile study on PersonaMem v2 and qualitative case studies illustrating chat-oriented capabilities such as user profiling and Foresight. Code is available at https://github.com/EverMind-AI/EverMemOS.
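The abstract's three lifecycle stages (Episodic Trace Formation into MemCells, Semantic Consolidation into MemScenes, and Reconstructive Recollection) can be pictured as plain data structures and transformations. The sketch below is a hypothetical illustration only, not the paper's implementation: all field names, the `theme_of` callback, and the keyword-based `recollect` matching are stand-ins for the LLM-driven extraction and agentic retrieval the system actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class MemCell:
    """One memory unit distilled from a dialogue stream (Episodic Trace Formation)."""
    episode: str                    # the episodic trace itself
    facts: list[str]                # atomic facts extracted from the episode
    foresight: dict[str, str] = field(default_factory=dict)  # time-bounded signals

@dataclass
class MemScene:
    """A thematic cluster of MemCells with a distilled summary (Semantic Consolidation)."""
    theme: str
    cells: list[MemCell] = field(default_factory=list)
    summary: str = ""

def consolidate(cells: list[MemCell], theme_of) -> dict[str, MemScene]:
    """Group MemCells by theme and refresh each scene's distilled summary."""
    scenes: dict[str, MemScene] = {}
    for cell in cells:
        scene = scenes.setdefault(theme_of(cell), MemScene(theme=theme_of(cell)))
        scene.cells.append(cell)
        scene.summary = "; ".join(f for c in scene.cells for f in c.facts)
    return scenes

def recollect(scenes: dict[str, MemScene], query: str) -> list[str]:
    """Reconstructive Recollection: select relevant scenes, then compose context."""
    hits = [s for s in scenes.values() if s.theme in query.lower()]
    return [fact for scene in hits for cell in scene.cells for fact in cell.facts]

# Toy usage: two dialogue turns consolidated into one "fitness" scene.
cells = [
    MemCell(episode="User: I started running 5k every morning.",
            facts=["user runs 5k daily"]),
    MemCell(episode="User: My knee hurts after long runs.",
            facts=["user has knee pain"]),
]
scenes = consolidate(cells, theme_of=lambda c: "fitness")
print(recollect(scenes, "What do we know about the user's fitness routine?"))
# → ['user runs 5k daily', 'user has knee pain']
```

The point of the sketch is the separation of concerns: formation produces small immutable records, consolidation maintains evolving thematic state (where conflicts between facts would be resolved), and recollection composes only the context a downstream query needs.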
Related papers
- TraceMem: Weaving Narrative Memory Schemata from User Conversational Traces [9.654990538033362]
Sustaining long-term interactions remains a bottleneck for Large Language Models. We propose TraceMem, a framework that weaves structured, narrative memory schemata from user conversational traces. TraceMem achieves state-of-the-art performance with a brain-inspired architecture.
arXiv Detail & Related papers (2026-02-10T12:14:58Z) - MetaMem: Evolving Meta-Memory for Knowledge Utilization through Self-Reflective Symbolic Optimization [57.17751568928966]
We propose MetaMem, a framework that augments memory systems with a self-evolving meta-memory. During meta-memory optimization, MetaMem iteratively distills transferable knowledge utilization experiences across different tasks. Extensive experiments demonstrate the effectiveness of MetaMem, which significantly outperforms strong baselines by over 3.6%.
arXiv Detail & Related papers (2026-01-27T04:46:23Z) - HiMem: Hierarchical Long-Term Memory for LLM Long-Horizon Agents [3.9396865837159822]
HiMem is a hierarchical long-term memory framework for long-horizon dialogues. It supports memory construction, retrieval, and dynamic updating during sustained interactions. Results show HiMem consistently outperforms representative baselines in accuracy, consistency, and long-term reasoning.
arXiv Detail & Related papers (2026-01-10T01:26:01Z) - EvolMem: A Cognitive-Driven Benchmark for Multi-Session Dialogue Memory [63.84216832544323]
EvolMem is a new benchmark for assessing the multi-session memory capabilities of large language models (LLMs) and agent systems. To construct the benchmark, we introduce a hybrid data synthesis framework that consists of topic-initiated generation and narrative-inspired transformations. Extensive evaluation reveals that no LLM consistently outperforms others across all memory dimensions.
arXiv Detail & Related papers (2026-01-07T03:14:42Z) - MemVerse: Multimodal Memory for Lifelong Learning Agents [35.218549149012844]
We introduce MemVerse, a model-agnostic, plug-and-play memory framework. MemVerse bridges fast parametric recall with hierarchical retrieval-based memory. It enables scalable and adaptive multimodal intelligence.
arXiv Detail & Related papers (2025-12-03T10:06:14Z) - Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory [89.65731902036669]
Evo-Memory is a streaming benchmark and framework for evaluating self-evolving memory in large language model (LLM) agents. We evaluate over ten representative memory modules across 10 diverse multi-turn goal-oriented and single-turn reasoning and QA datasets.
arXiv Detail & Related papers (2025-11-25T21:08:07Z) - Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z) - Mem-α: Learning Memory Construction via Reinforcement Learning [20.916677456417464]
Large language model (LLM) agents are constrained by limited context windows. Current memory-augmented agents depend on pre-defined instructions and tools for memory updates. Mem-alpha is a reinforcement learning framework that trains agents to effectively manage complex memory systems.
arXiv Detail & Related papers (2025-09-30T08:02:34Z) - Multiple Memory Systems for Enhancing the Long-term Memory of Agent [9.43633399280987]
Existing methods, such as MemoryBank and A-MEM, suffer from poor quality of stored memory content. We have designed a multiple memory system inspired by cognitive psychology theory.
arXiv Detail & Related papers (2025-08-21T06:29:42Z) - MemOS: A Memory OS for AI System [116.87568350346537]
Large Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI). Existing models mainly rely on static parameters and short-lived contextual states, limiting their ability to track user preferences or update knowledge over extended periods. MemOS is a memory operating system that treats memory as a manageable system resource.
arXiv Detail & Related papers (2025-07-04T17:21:46Z) - MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models [31.944531660401722]
We introduce MemOS, a memory operating system designed for Large Language Models (LLMs). At its core is the MemCube, a standardized memory abstraction that enables tracking, fusion, and migration of heterogeneous memory. MemOS establishes a memory-centric execution framework with strong controllability, adaptability, and evolvability.
arXiv Detail & Related papers (2025-05-28T08:27:12Z)
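The MemCube described in the MemOS entries is only named in the abstracts, but the operations it is said to support (tracking, fusion, and migration of heterogeneous memory) can be illustrated with a minimal sketch. All field names, the memory-kind labels, and the helper functions below are hypothetical stand-ins, not the paper's API.

```python
from dataclasses import dataclass
import time
import uuid

@dataclass
class MemCube:
    """Hypothetical standardized memory unit with provenance metadata for tracking."""
    payload: str
    kind: str            # e.g. "plaintext", "activation", "parametric" (illustrative labels)
    origin: str          # which component or operation produced this cube
    created_at: float    # wall-clock timestamp, enables lifecycle tracking
    cube_id: str         # unique identifier

def make_cube(payload: str, kind: str, origin: str) -> MemCube:
    """Create a tracked memory unit with a fresh identifier."""
    return MemCube(payload, kind, origin, time.time(), uuid.uuid4().hex)

def fuse(a: MemCube, b: MemCube) -> MemCube:
    """Fuse two cubes of the same kind into one, preserving both provenances."""
    if a.kind != b.kind:
        raise ValueError("can only fuse cubes of the same memory kind")
    return make_cube(a.payload + "\n" + b.payload, a.kind,
                     f"fuse({a.origin},{b.origin})")

def migrate(cube: MemCube, target_kind: str) -> MemCube:
    """Migrate a cube to a different memory kind, recording the transition."""
    return make_cube(cube.payload, target_kind, f"migrated:{cube.cube_id}")
```

The design point the sketch captures is that every memory unit carries uniform metadata, so heterogeneous contents can flow through the same fuse/migrate operations while their history stays auditable.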
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.