Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents
- URL: http://arxiv.org/abs/2601.07468v1
- Date: Mon, 12 Jan 2026 12:24:44 GMT
- Title: Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents
- Authors: Miao Su, Yucan Guo, Zhongni Hou, Long Bai, Zixuan Li, Yufei Zhang, Guojun Yin, Wei Lin, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng
- Abstract summary: Temporal Semantic Memory (TSM) is a memory framework that models semantic time for point-wise memory. TSM consistently outperforms existing methods and achieves up to 12.2% absolute improvement in accuracy.
- Score: 68.84161689205779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memory enables Large Language Model (LLM) agents to perceive, store, and use information from past dialogues, which is essential for personalization. However, existing methods fail to properly model the temporal dimension of memory in two aspects: 1) Temporal inaccuracy: memories are organized by dialogue time rather than their actual occurrence time; 2) Temporal fragmentation: existing methods focus on point-wise memory, losing durative information that captures persistent states and evolving patterns. To address these limitations, we propose Temporal Semantic Memory (TSM), a memory framework that models semantic time for point-wise memory and supports the construction and utilization of durative memory. During memory construction, it first builds a semantic timeline rather than a dialogue one. Then, it consolidates temporally continuous and semantically related information into a durative memory. During memory utilization, it incorporates the query's temporal intent on the semantic timeline, enabling the retrieval of temporally appropriate durative memories and providing time-valid, duration-consistent context to support response generation. Experiments on LongMemEval and LoCoMo show that TSM consistently outperforms existing methods and achieves up to 12.2% absolute improvement in accuracy, demonstrating the effectiveness of the proposed method.
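The abstract describes a two-stage pipeline: during construction, point-wise memories are placed on a semantic (event-time) timeline and temporally continuous, semantically related items are consolidated into durative memories; during utilization, retrieval matches the query's temporal intent against those intervals. The paper does not publish an implementation, so the following is a minimal illustrative sketch of that idea under simplifying assumptions: semantic time is a scalar, semantic relatedness is approximated by an exact topic match, and temporal intent is a query interval. All names (`Memory`, `consolidate`, `retrieve`, `gap`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    start: float  # semantic (event) time, not dialogue time
    end: float
    topic: str    # crude stand-in for semantic relatedness

def consolidate(memories, gap=1.0):
    """Merge temporally continuous, semantically related point-wise
    memories into durative memories (intervals on the semantic timeline)."""
    durative = []
    for m in sorted(memories, key=lambda m: m.start):
        last = durative[-1] if durative else None
        if last and m.topic == last.topic and m.start - last.end <= gap:
            # Extend the existing durative memory instead of storing a new point.
            last.end = max(last.end, m.end)
            last.text += "; " + m.text
        else:
            durative.append(Memory(m.text, m.start, m.end, m.topic))
    return durative

def retrieve(durative, topic, q_start, q_end):
    """Return durative memories whose interval overlaps the query's
    temporal intent, yielding time-valid, duration-consistent context."""
    return [m for m in durative
            if m.topic == topic and m.start <= q_end and m.end >= q_start]

if __name__ == "__main__":
    mems = [
        Memory("ran 5k", 1, 1, "exercise"),
        Memory("ran 10k", 2, 2, "exercise"),
        Memory("started new job", 5, 5, "work"),
    ]
    d = consolidate(mems)        # two durative memories: exercise [1,2], work [5,5]
    hits = retrieve(d, "exercise", 0, 3)
    print(hits[0].text)
```

The design point the sketch tries to capture is that retrieval operates over intervals on the semantic timeline rather than over dialogue timestamps, so a query like "what was I doing back then?" selects memories valid during the queried period, not memories merely mentioned then.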
Related papers
- Beyond the Context Window: A Cost-Performance Analysis of Fact-Based Memory vs. Long-Context LLMs for Persistent Agents [0.0]
Persistent AI systems face a choice between passing full conversation histories to a long-context large language model (LLM) and maintaining a dedicated memory system that extracts and retrieves structured facts. We compare a fact-based memory system built on the Mem0 framework against long-context LLM inference on three memory-centric benchmarks.
arXiv Detail & Related papers (2026-03-05T05:01:30Z)
- MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks [55.145729491377374]
Existing evaluations of agents with memory typically assess memorization and action in isolation. We introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops. MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning.
arXiv Detail & Related papers (2026-02-18T09:49:14Z)
- EverMemBench: Benchmarking Long-Term Interactive Memory in Large Language Models [16.865998112859604]
We introduce EverMemBench, a benchmark featuring multi-party, multi-group conversations spanning over 1 million tokens. EverMemBench evaluates memory systems across three dimensions through 1,000+ QA pairs.
arXiv Detail & Related papers (2026-02-01T16:13:08Z)
- Amory: Building Coherent Narrative-Driven Agent Memory through Agentic Reasoning [14.368376032599437]
Amory is a working memory framework that actively constructs structured memory representations during offline time. Amory organizes conversational fragments into episodic narratives, consolidates memories with momentum, and semanticizes peripheral facts into semantic memory. Amory achieves considerable improvements over the previous state-of-the-art, with performance comparable to full-context reasoning while reducing response time by 50%.
arXiv Detail & Related papers (2026-01-09T19:51:11Z)
- Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents [76.76004970226485]
Long-term memory is a critical capability for multimodal large language model (MLLM) agents. Mem-Gallery is a new benchmark for evaluating multimodal long-term conversational memory in MLLM agents.
arXiv Detail & Related papers (2026-01-07T02:03:13Z)
- Memory-T1: Reinforcement Learning for Temporal Reasoning in Multi-session Agents [80.33280979339123]
We introduce Memory-T1, a framework that learns a time-aware memory selection policy using reinforcement learning (RL). On the Time-Dialog benchmark, Memory-T1 boosts a 7B model to an overall score of 67.0%, establishing new state-of-the-art performance for open-source models.
arXiv Detail & Related papers (2025-12-23T06:37:29Z)
- WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning [66.24870234484668]
We introduce WorldMM, a novel multimodal memory agent that constructs and retrieves from multiple complementary memories. WorldMM significantly outperforms existing baselines across five long-video question-answering benchmarks.
arXiv Detail & Related papers (2025-12-02T05:14:52Z)
- Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z)
- Multiple Memory Systems for Enhancing the Long-term Memory of Agent [9.43633399280987]
Existing methods, such as MemoryBank and A-MEM, suffer from poor quality of stored memory content. We have designed a multiple memory system inspired by cognitive psychology theory.
arXiv Detail & Related papers (2025-08-21T06:29:42Z)
- Ever-Evolving Memory by Blending and Refining the Past [30.63352929849842]
CREEM is a novel memory system for long-term conversation. It seamlessly connects past and present information, while also being able to forget obstructive information.
arXiv Detail & Related papers (2024-03-03T08:12:59Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.