RGMem: Renormalization Group-based Memory Evolution for Language Agent User Profile
- URL: http://arxiv.org/abs/2510.16392v1
- Date: Sat, 18 Oct 2025 08:16:46 GMT
- Title: RGMem: Renormalization Group-based Memory Evolution for Language Agent User Profile
- Authors: Ao Tian, Yunfeng Lu, Xinxin Fan, Changhao Wang, Lanzhi Zhou, Yeyao Zhang, Yanfang Liu,
- Abstract summary: We propose a self-evolving memory framework inspired by the classic renormalization group (RG) in physics. The framework organizes dialogue history at multiple scales. The core innovation of our work lies in modeling memory evolution as a multi-scale process of information compression and emergence.
- Score: 8.224917568034572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized and continuous interactions are key to enhancing user experience in today's large language model (LLM)-based conversational systems. However, finite context windows and static parametric memory make it difficult to model cross-session long-term user states and behavioral consistency. Existing solutions to this predicament, such as retrieval-augmented generation (RAG) and explicit memory systems, focus primarily on fact-level storage and retrieval. They lack the capability to distill latent preferences and deep traits from multi-turn dialogues, which limits effective long-term user modeling, keeps personalized interactions shallow, and hinders cross-session continuity. To realize long-term memory and behavioral consistency for language agents in the LLM era, we propose RGMem, a self-evolving memory framework inspired by the classic renormalization group (RG) in physics. The framework organizes dialogue history at multiple scales: it first extracts semantics and user insights from episodic fragments, then, through hierarchical coarse-graining and rescaling operations, progressively forms a dynamically evolving user profile. The core innovation of our work lies in modeling memory evolution as a multi-scale process of information compression and emergence, which produces high-level, accurate user profiles from noisy, microscopic-level interactions.
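The coarse-graining-and-rescaling loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`merge`, `coarse_grain`, `build_profile`) and the pairwise merge rule are assumptions for demonstration only, and RGMem's actual operators would use an LLM to distill each group of episodic fragments rather than simple string concatenation.

```python
from typing import List

def merge(a: str, b: str) -> str:
    """Toy 'compression' of two fragments into one higher-level summary.
    A real system would call an LLM to distill the pair into an insight."""
    return f"[{a} + {b}]"

def coarse_grain(fragments: List[str]) -> List[str]:
    """One RG-like step: group adjacent fragments and compress each group,
    halving the number of memory units (the 'rescaling')."""
    out = []
    for i in range(0, len(fragments) - 1, 2):
        out.append(merge(fragments[i], fragments[i + 1]))
    if len(fragments) % 2:  # carry an unpaired fragment up to the next scale
        out.append(fragments[-1])
    return out

def build_profile(episodes: List[str]) -> List[List[str]]:
    """Iterate coarse-graining until one top-level profile remains,
    keeping every scale from microscopic episodes to the macro profile."""
    scales = [episodes]
    while len(scales[-1]) > 1:
        scales.append(coarse_grain(scales[-1]))
    return scales

episodes = ["likes jazz", "asks about piano", "books a concert", "mentions Coltrane"]
scales = build_profile(episodes)
for level, frags in enumerate(scales):
    print(level, frags)
```

With four episodic fragments, the loop produces three scales: the raw episodes, two intermediate summaries, and a single top-level profile, mirroring the abstract's progression from noisy micro-level interactions to an emergent user profile.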
Related papers
- ES-MemEval: Benchmarking Conversational Agents on Personalized Long-Term Emotional Support [11.480342895892404]
Large Language Models (LLMs) have shown strong potential as conversational agents. Yet, their effectiveness remains limited by deficiencies in robust long-term memory. ES-MemEval is a benchmark that systematically evaluates five core memory capabilities. EvoEmo is a dataset for personalized long-term emotional support.
arXiv Detail & Related papers (2026-02-02T09:58:26Z) - The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z) - Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction [35.20324450282101]
We show that an agent's reliance on memory can be modeled as an explicit and user-controllable dimension. We propose the Steerable Memory Agent (SteeM), a framework that allows users to dynamically regulate memory reliance.
arXiv Detail & Related papers (2026-01-08T16:54:30Z) - EvolMem: A Cognitive-Driven Benchmark for Multi-Session Dialogue Memory [63.84216832544323]
EvolMem is a new benchmark for assessing the multi-session memory capabilities of large language models (LLMs) and agent systems. To construct the benchmark, we introduce a hybrid data synthesis framework that consists of topic-initiated generation and narrative-inspired transformations. Extensive evaluation reveals that no LLM consistently outperforms others across all memory dimensions.
arXiv Detail & Related papers (2026-01-07T03:14:42Z) - Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents [76.76004970226485]
Long-term memory is a critical capability for multimodal large language model (MLLM) agents. Mem-Gallery is a new benchmark for evaluating multimodal long-term conversational memory in MLLM agents.
arXiv Detail & Related papers (2026-01-07T02:03:13Z) - Memoria: A Scalable Agentic Memory Framework for Personalized Conversational AI [0.6840655769002751]
Agentic memory is emerging as a key enabler for large language models (LLMs). We present Memoria, a modular memory framework that augments LLM-based conversational systems with persistent, interpretable, and context-rich memory. We demonstrate how Memoria enables scalable, personalized conversational artificial intelligence (AI) by bridging the gap between stateless LLM interfaces and agentic memory systems.
arXiv Detail & Related papers (2025-12-14T13:38:06Z) - Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory [89.65731902036669]
Evo-Memory is a streaming benchmark and framework for evaluating self-evolving memory in large language model (LLM) agents. We evaluate over ten representative memory modules across 10 diverse multi-turn goal-oriented and single-turn reasoning and QA datasets.
arXiv Detail & Related papers (2025-11-25T21:08:07Z) - Can We Predict the Next Question? A Collaborative Filtering Approach to Modeling User Behavior [16.241726074740082]
Large language models (LLMs) have excelled in language understanding and generation, powering advanced dialogue and recommendation systems. We propose a Collaborative Filtering-enhanced Question Prediction framework to bridge the gap between language modeling and behavioral sequence modeling.
arXiv Detail & Related papers (2025-11-17T04:01:20Z) - Preference-Aware Memory Update for Long-Term LLM Agents [27.776042930733784]
One of the key factors influencing the reasoning capabilities of LLM-based agents is their ability to leverage long-term memory. We propose a Preference-Aware Memory Update Mechanism (PAMU) that enables dynamic and personalized memory refinement.
arXiv Detail & Related papers (2025-10-10T06:49:35Z) - From Single to Multi-Granularity: Toward Long-Term Memory Association and Selection of Conversational Agents [79.87304940020256]
Large Language Models (LLMs) have been widely adopted in conversational agents. MemGAS is a framework that enhances memory consolidation through multi-granularity association construction, adaptive selection, and retrieval. Experiments on four long-term memory benchmarks demonstrate that MemGAS outperforms state-of-the-art methods on both question answering and retrieval tasks.
arXiv Detail & Related papers (2025-05-26T06:13:07Z) - In Prospect and Retrospect: Reflective Memory Management for Long-term Personalized Dialogue Agents [70.12342024019044]
Large Language Models (LLMs) have made significant progress in open-ended dialogue, yet their inability to retain and retrieve relevant information limits their effectiveness. We propose Reflective Memory Management (RMM), a novel mechanism for long-term dialogue agents, integrating forward- and backward-looking reflections. RMM shows more than 10% accuracy improvement over the baseline without memory management on the LongMemEval dataset.
arXiv Detail & Related papers (2025-03-11T04:15:52Z) - SynapticRAG: Enhancing Temporal Memory Retrieval in Large Language Models through Synaptic Mechanisms [8.787174594966492]
We propose SynapticRAG, which combines temporal association triggers with biologically inspired synaptic propagation mechanisms. Our approach uses temporal association triggers and synaptic-like stimulus propagation to identify relevant dialogue histories. Experiments on four datasets show that SynapticRAG achieves consistent improvements of up to 14.66 percentage points across multiple metrics.
arXiv Detail & Related papers (2024-10-17T13:51:03Z) - Hello Again! LLM-powered Personalized Agent for Long-term Dialogue [63.65128176360345]
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent). It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation. The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
arXiv Detail & Related papers (2024-06-09T21:58:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.