Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents
- URL: http://arxiv.org/abs/2601.03785v1
- Date: Wed, 07 Jan 2026 10:36:29 GMT
- Title: Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents
- Authors: Dehao Tao, Guoliang Ma, Yongfeng Huang, Minghu Jiang
- Abstract summary: We introduce Membox, a hierarchical memory architecture centered on a Topic Loom. Membox monitors dialogue in a sliding-window fashion, grouping consecutive same-topic turns into coherent "memory boxes" at storage time. Experiments on LoCoMo demonstrate that Membox achieves up to a 68% F1 improvement on temporal reasoning tasks.
- Score: 14.666607208502185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-agent dialogues often exhibit topic continuity, a stable thematic frame that evolves through temporally adjacent exchanges, yet most large language model (LLM) agent memory systems fail to preserve it. Existing designs follow a fragmentation-compensation paradigm: they first break dialogue streams into isolated utterances for storage, then attempt to restore coherence via embedding-based retrieval. This process irreversibly damages narrative and causal flow, while biasing retrieval towards lexical similarity. We introduce Membox, a hierarchical memory architecture centered on a Topic Loom that continuously monitors dialogue in a sliding-window fashion, grouping consecutive same-topic turns into coherent "memory boxes" at storage time. Sealed boxes are then linked by a Trace Weaver into long-range event-timeline traces, recovering macro-topic recurrences across discontinuities. Experiments on LoCoMo demonstrate that Membox achieves up to a 68% F1 improvement on temporal reasoning tasks, outperforming competitive baselines (e.g., Mem0, A-MEM). Notably, Membox attains these gains while using only a fraction of the context tokens required by existing methods, highlighting a superior balance between efficiency and effectiveness. By explicitly modeling topic continuity, Membox offers a cognitively motivated mechanism for enhancing both coherence and efficiency in LLM agents.
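The storage-time mechanism described in the abstract (sliding-window topic monitoring, box sealing, trace weaving) can be sketched in Python. This is a minimal illustration, not the paper's implementation: the names `TopicLoom`, `MemoryBox`, and `weave_traces` and the word-overlap (Jaccard) topic test are assumptions standing in for whatever topic model the authors actually use.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBox:
    topic: set                     # crude topic signature: a set of content words
    turns: list = field(default_factory=list)
    sealed: bool = False

def topic_overlap(a: set, b: set) -> float:
    # Jaccard similarity as a stand-in for the paper's topic monitor
    return len(a & b) / max(len(a | b), 1)

class TopicLoom:
    """Groups consecutive same-topic turns into memory boxes at storage time."""
    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold
        self.boxes: list[MemoryBox] = []

    def add_turn(self, text: str) -> None:
        words = {w.lower() for w in text.split() if len(w) > 3}
        last = self.boxes[-1] if self.boxes else None
        if last and not last.sealed and topic_overlap(last.topic, words) >= self.threshold:
            last.turns.append(text)
            last.topic |= words        # let the topic signature evolve with the box
        else:
            if last:
                last.sealed = True     # topic shift detected: seal the previous box
            self.boxes.append(MemoryBox(topic=words, turns=[text]))

def weave_traces(boxes, threshold: float = 0.2):
    """Link boxes whose topics recur across discontinuities into timeline traces."""
    traces = []
    for box in boxes:
        for trace in traces:
            if topic_overlap(trace[-1].topic, box.topic) >= threshold:
                trace.append(box)      # macro-topic recurrence: extend the trace
                break
        else:
            traces.append([box])       # no matching trace: start a new one
    return traces
```

For example, two hiking-related turns fall into one box, a tax question opens (and a topic shift seals) a second box, and a later return to hiking opens a third box that `weave_traces` links back to the first, yielding two traces.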
Related papers
- From Verbatim to Gist: Distilling Pyramidal Multimodal Memory via Semantic Information Bottleneck for Long-Horizon Video Agents [78.30630000529133]
We propose MM-Mem, a pyramidal multimodal memory architecture grounded in Fuzzy-Trace Theory. MM-Mem structures memory hierarchically into a Sensory Buffer, Episodic Stream, and Symbolic. Experiments confirm the effectiveness of MM-Mem on both offline and streaming tasks.
arXiv Detail & Related papers (2026-03-02T05:12:45Z)
- TraceMem: Weaving Narrative Memory Schemata from User Conversational Traces [9.654990538033362]
Sustaining long-term interactions remains a bottleneck for Large Language Models. We propose TraceMem, a framework that weaves structured, narrative memory schemata from user conversational traces. TraceMem achieves state-of-the-art performance with a brain-inspired architecture.
arXiv Detail & Related papers (2026-02-10T12:14:58Z)
- AMA: Adaptive Memory via Multi-Agent Collaboration [54.490349689939166]
We propose Adaptive Memory via Multi-Agent Collaboration (AMA), a novel framework that leverages coordinated agents to manage memory across multiple granularities. AMA significantly outperforms state-of-the-art baselines while reducing token consumption by approximately 80% compared to full-context methods.
arXiv Detail & Related papers (2026-01-28T08:09:49Z)
- MemRec: Collaborative Memory-Augmented Agentic Recommender System [57.548438733740504]
We propose MemRec, a framework that architecturally decouples reasoning from memory management. MemRec introduces a dedicated LM_Mem to manage a dynamic collaborative memory graph. It achieves state-of-the-art performance on four benchmarks.
arXiv Detail & Related papers (2026-01-13T18:51:16Z)
- Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents [68.84161689205779]
Temporal Semantic Memory (TSM) is a memory framework that models semantic time for point-wise memory. TSM consistently outperforms existing methods and achieves up to 12.2% absolute improvement in accuracy.
arXiv Detail & Related papers (2026-01-12T12:24:44Z)
- Amory: Building Coherent Narrative-Driven Agent Memory through Agentic Reasoning [14.368376032599437]
Amory is a working memory framework that actively constructs structured memory representations during offline time. Amory organizes conversational fragments into episodic narratives, consolidates memories with momentum, and semanticizes peripheral facts into semantic memory. Amory achieves considerable improvements over previous state-of-the-art, with performance comparable to full context reasoning while reducing response time by 50%.
arXiv Detail & Related papers (2026-01-09T19:51:11Z)
- FlashMem: Distilling Intrinsic Latent Memory via Computation Reuse [4.210760734549566]
FlashMem is a framework that distills intrinsic memory directly from transient reasoning states via computation reuse. Experiments demonstrate that FlashMem matches the performance of heavy baselines while reducing inference latency by a factor of 5.
arXiv Detail & Related papers (2026-01-09T03:27:43Z)
- EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning [42.339841548168565]
Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks.
arXiv Detail & Related papers (2026-01-05T14:39:43Z)
- CogMem: A Cognitive Memory Architecture for Sustained Multi-Turn Reasoning in Large Language Models [21.427373172124167]
Large language models (LLMs) excel at single-turn reasoning but often lose accuracy and coherence over extended, multi-turn interactions. We introduce CogMem, a memory-augmented LLM architecture that supports sustained iterative reasoning through structured, persistent memory. Experiments on TurnBench show that this layered design mitigates reasoning failures, controls context growth, and improves consistency across extended reasoning chains.
arXiv Detail & Related papers (2025-12-16T06:01:08Z)
- MemVerse: Multimodal Memory for Lifelong Learning Agents [35.218549149012844]
We introduce MemVerse, a model-agnostic, plug-and-play memory framework. MemVerse bridges fast parametric recall with hierarchical retrieval-based memory. It enables scalable and adaptive multimodal intelligence.
arXiv Detail & Related papers (2025-12-03T10:06:14Z)
- Agentic Learner with Grow-and-Refine Multimodal Semantic Memory [50.81667005063605]
ViLoMem is a dual-stream memory framework that constructs compact, schema-based memory. It encodes visual distraction patterns and logical reasoning errors, enabling MLLMs to learn from their successful and failed experiences.
arXiv Detail & Related papers (2025-11-26T18:55:08Z)
- Memo: Training Memory-Efficient Embodied Agents with Reinforcement Learning [53.72709564555407]
Memo is a transformer-based architecture and training recipe for reinforcement learning. It incorporates the creation and retrieval of memory by interleaving periodic summarization tokens with the inputs of a model during training. We demonstrate Memo's effectiveness on a gridworld meta-RL benchmark and a multi-object navigation task in photo-realistic indoor settings.
arXiv Detail & Related papers (2025-10-22T16:24:47Z)
- CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of the agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z)
- From Single to Multi-Granularity: Toward Long-Term Memory Association and Selection of Conversational Agents [79.87304940020256]
Large Language Models (LLMs) have been widely adopted in conversational agents. MemGAS is a framework that enhances memory consolidation by constructing multi-granularity association, adaptive selection, and retrieval. Experiments on four long-term memory benchmarks demonstrate that MemGAS outperforms state-of-the-art methods on both question answering and retrieval tasks.
arXiv Detail & Related papers (2025-05-26T06:13:07Z)
- In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents [70.12342024019044]
Large Language Models (LLMs) have made significant progress in open-ended dialogue, yet their inability to retain and retrieve relevant information limits their effectiveness. We propose Reflective Memory Management (RMM), a novel mechanism for long-term dialogue agents, integrating forward- and backward-looking reflections. RMM shows more than 10% accuracy improvement over the baseline without memory management on the LongMemEval dataset.
arXiv Detail & Related papers (2025-03-11T04:15:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.