TraceMem: Weaving Narrative Memory Schemata from User Conversational Traces
- URL: http://arxiv.org/abs/2602.09712v1
- Date: Tue, 10 Feb 2026 12:14:58 GMT
- Title: TraceMem: Weaving Narrative Memory Schemata from User Conversational Traces
- Authors: Yiming Shu, Pei Liu, Tiange Zhang, Ruiyang Gao, Jun Ma, Chen Sun,
- Abstract summary: Sustaining long-term interactions remains a bottleneck for Large Language Models. We propose TraceMem, a framework that weaves structured, narrative memory schemata from user conversational traces. TraceMem achieves state-of-the-art performance with a brain-inspired architecture.
- Score: 9.654990538033362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sustaining long-term interactions remains a bottleneck for Large Language Models (LLMs), as their limited context windows struggle to manage dialogue histories that extend over time. Existing memory systems often treat interactions as disjointed snippets, failing to capture the underlying narrative coherence of the dialogue stream. We propose TraceMem, a cognitively-inspired framework that weaves structured, narrative memory schemata from user conversational traces through a three-stage pipeline: (1) Short-term Memory Processing, which employs a deductive topic segmentation approach to demarcate episode boundaries and extract semantic representations; (2) Synaptic Memory Consolidation, a process that summarizes episodes into episodic memories before distilling them alongside semantics into user-specific traces; and (3) Systems Memory Consolidation, which utilizes two-stage hierarchical clustering to organize these traces into coherent, time-evolving narrative threads under unifying themes. These threads are encapsulated into structured user memory cards, forming narrative memory schemata. For memory utilization, we provide an agentic search mechanism to enhance the reasoning process. Evaluation on the LoCoMo benchmark shows that TraceMem achieves state-of-the-art performance with a brain-inspired architecture. Analysis shows that by constructing coherent narratives, it surpasses baselines in multi-hop and temporal reasoning, underscoring the essential role of narrative construction in deep dialogue comprehension. Additionally, we provide an open discussion on memory systems, offering our perspectives and future outlook on the field. Our code implementation is available at: https://github.com/YimingShu-teay/TraceMem
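The three-stage pipeline described in the abstract can be sketched end to end with toy stand-ins for each stage. Everything below (the function names, the marker-based segmenter in place of deductive topic segmentation, and keyword matching in place of two-stage hierarchical clustering) is a hypothetical illustration under those simplifying assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

# Stage 1 (toy): demarcate episode boundaries with topic-shift markers,
# standing in for the paper's deductive topic segmentation.
def segment_episodes(turns, markers=("by the way", "anyway", "on another note")):
    episodes, current = [], []
    for turn in turns:
        if current and turn.lower().startswith(markers):
            episodes.append(current)
            current = []
        current.append(turn)
    if current:
        episodes.append(current)
    return episodes

# Stage 2 (toy): consolidate each episode into a user-specific trace;
# here the "gist" is just the episode's opening turn plus a turn count.
def consolidate(episodes):
    return [{"gist": ep[0], "turns": len(ep)} for ep in episodes]

# Stage 3 (toy): weave traces into themed narrative threads by keyword
# match, standing in for two-stage hierarchical clustering.
def weave_threads(traces, themes):
    threads = defaultdict(list)
    for trace in traces:
        theme = next((t for t in themes if t in trace["gist"].lower()), "misc")
        threads[theme].append(trace)
    return dict(threads)

# Assemble a "memory card": one time-ordered thread per unifying theme.
turns = [
    "I started training for a marathon last month.",
    "My knees ache after the long runs.",
    "By the way, I also began learning Spanish.",
    "Verb conjugation is the hardest part so far.",
]
card = weave_threads(consolidate(segment_episodes(turns)), ["marathon", "spanish"])
print(sorted(card))  # → ['marathon', 'spanish']
```

In this sketch the memory card is just a dict of theme-keyed trace lists; the real system would attach summaries, timestamps, and semantic representations, and the agentic search mechanism would query the card at answer time.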
Related papers
- The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z) - ES-Mem: Event Segmentation-Based Memory for Long-Term Dialogue Agents [25.10969436399974]
ES-Mem is a framework that partitions long-term interactions into semantically coherent events with distinct boundaries. We show that ES-Mem yields consistent performance gains over baseline methods. The proposed event segmentation module exhibits robust applicability on dialogue segmentation datasets.
arXiv Detail & Related papers (2026-01-12T14:33:32Z) - Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents [68.84161689205779]
Temporal Semantic Memory (TSM) is a memory framework that models semantic time for point-wise memory. TSM consistently outperforms existing methods and achieves up to a 12.2% absolute improvement in accuracy.
arXiv Detail & Related papers (2026-01-12T12:24:44Z) - Amory: Building Coherent Narrative-Driven Agent Memory through Agentic Reasoning [14.368376032599437]
Amory is a working memory framework that actively constructs structured memory representations offline. Amory organizes conversational fragments into episodic narratives, consolidates memories with momentum, and semanticizes peripheral facts into semantic memory. Amory achieves considerable improvements over the previous state of the art, with performance comparable to full-context reasoning while reducing response time by 50%.
arXiv Detail & Related papers (2026-01-09T19:51:11Z) - Memory Matters More: Event-Centric Memory as a Logic Map for Agent Searching and Reasoning [55.251697395358285]
Large language models (LLMs) are increasingly deployed as intelligent agents that reason, plan, and interact with their environments. To effectively scale to long-horizon scenarios, a key capability for such agents is a memory mechanism that can retain, organize, and retrieve past experiences. We propose CompassMem, an event-centric memory framework inspired by Event Theory.
arXiv Detail & Related papers (2026-01-08T08:44:07Z) - Membox: Weaving Topic Continuity into Long-Range Memory for LLM Agents [14.666607208502185]
We introduce Membox, a hierarchical memory architecture centered on a Topic Loom. Membox monitors dialogue in a sliding-window fashion, grouping consecutive same-topic turns into coherent "memory boxes" at storage time. Experiments on LoCoMo demonstrate that Membox achieves up to a 68% F1 improvement on temporal reasoning tasks.
arXiv Detail & Related papers (2026-01-07T10:36:29Z) - Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents [76.76004970226485]
Long-term memory is a critical capability for multimodal large language model (MLLM) agents. Mem-Gallery is a new benchmark for evaluating multimodal long-term conversational memory in MLLM agents.
arXiv Detail & Related papers (2026-01-07T02:03:13Z) - EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning [42.339841548168565]
Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks.
arXiv Detail & Related papers (2026-01-05T14:39:43Z) - Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z) - On Memory Construction and Retrieval for Personalized Conversational Agents [69.46887405020186]
We propose SeCom, a method that constructs the memory bank at segment level by introducing a conversation segmentation model. Experimental results show that SeCom exhibits a significant performance advantage over baselines on the long-term conversation benchmarks LOCOMO and Long-MT-Bench+.
arXiv Detail & Related papers (2025-02-08T14:28:36Z)
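Several of the systems listed above (e.g. Membox and SeCom) store memories at the level of topically coherent segments rather than individual turns. A minimal sketch of that storage-time grouping, in which the function name, topic labels, and box structure are all hypothetical illustrations rather than any paper's actual API:

```python
# Group consecutive turns that share a topic label into one "memory box",
# a toy stand-in for sliding-window topic tracking at storage time.
def group_into_boxes(labeled_turns):
    boxes = []
    for topic, text in labeled_turns:
        if boxes and boxes[-1]["topic"] == topic:
            boxes[-1]["turns"].append(text)
        else:
            boxes.append({"topic": topic, "turns": [text]})
    return boxes

dialogue = [
    ("travel", "We landed in Kyoto on Friday."),
    ("travel", "The temples were stunning."),
    ("work", "My project deadline moved up a week."),
    ("travel", "Next trip we want to see Osaka."),
]
boxes = group_into_boxes(dialogue)
print([b["topic"] for b in boxes])  # → ['travel', 'work', 'travel']
```

Note that a later return to an earlier topic starts a fresh box, so temporal order is preserved; linking those same-topic boxes back together is exactly the kind of continuity weaving these frameworks add on top.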
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.