MemWeaver: A Hierarchical Memory from Textual Interactive Behaviors for Personalized Generation
- URL: http://arxiv.org/abs/2510.07713v1
- Date: Thu, 09 Oct 2025 02:47:21 GMT
- Title: MemWeaver: A Hierarchical Memory from Textual Interactive Behaviors for Personalized Generation
- Authors: Shuo Yu, Mingyue Cheng, Daoyu Wang, Qi Liu, Zirui Liu, Ze Guo, Xiaoyu Tao
- Abstract summary: We propose a framework that weaves the user's entire textual history into a hierarchical memory to power deeply personalized generation. MemWeaver builds two complementary memory components that both integrate temporal and semantic information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The primary form of user-internet engagement is shifting from leveraging implicit feedback signals, such as browsing and clicks, to harnessing the rich explicit feedback provided by textual interactive behaviors. This shift unlocks a rich source of user textual history, presenting a profound opportunity for a deeper form of personalization. However, prevailing approaches offer only a shallow form of personalization, as they treat user history as a flat list of texts for retrieval and fail to model the rich temporal and semantic structures reflecting the dynamic nature of user interests. In this work, we propose MemWeaver, a framework that weaves the user's entire textual history into a hierarchical memory to power deeply personalized generation. The core innovation of our memory lies in its ability to capture both the temporal evolution of interests and the semantic relationships between different activities. To achieve this, MemWeaver builds two complementary memory components that both integrate temporal and semantic information, but at different levels of abstraction: behavioral memory, which captures specific user actions, and cognitive memory, which represents long-term preferences. This dual-component memory serves as a unified representation of the user, allowing large language models (LLMs) to reason over both concrete behaviors and abstracted traits. Experiments on the Language Model Personalization (LaMP) benchmark validate the efficacy of MemWeaver. Our code is available at https://github.com/fishsure/MemWeaver.
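The dual-component memory described in the abstract can be sketched as a simple data structure. This is a minimal illustration under assumptions: the class and field names below are hypothetical, and a real system would distill cognitive traits from behavior with an LLM rather than store them by hand.

```python
from dataclasses import dataclass, field

@dataclass
class BehavioralMemory:
    """Concrete user actions, kept in temporal order."""
    events: list = field(default_factory=list)  # (timestamp, text) pairs

    def add(self, timestamp, text):
        self.events.append((timestamp, text))
        self.events.sort(key=lambda e: e[0])  # preserve temporal order

@dataclass
class CognitiveMemory:
    """Abstracted long-term preferences distilled from behavior."""
    traits: list = field(default_factory=list)  # short natural-language summaries

class UserMemory:
    """Unified representation combining both components for LLM reasoning."""
    def __init__(self):
        self.behavioral = BehavioralMemory()
        self.cognitive = CognitiveMemory()

    def as_prompt_context(self, k=3):
        """Serialize the k most recent behaviors plus all long-term traits."""
        recent = [text for _, text in self.behavioral.events[-k:]]
        return {"recent_behaviors": recent,
                "long_term_traits": list(self.cognitive.traits)}
```

The point of the sketch is the split the paper emphasizes: the behavioral component keeps time-ordered concrete actions, while the cognitive component holds stable abstractions, and both are surfaced together as a single user context.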
Related papers
- OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents [55.27061195244624]
We formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy.
Agents tend to retrieve and over-attend to user memories even when unnecessary.
Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems.
arXiv Detail & Related papers (2026-01-20T08:27:13Z)
- The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers.
Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations.
Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z)
- Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents [68.84161689205779]
Temporal Semantic Memory (TSM) is a memory framework that models semantic time for point-wise memory.
TSM consistently outperforms existing methods and achieves up to a 12.2% absolute improvement in accuracy.
arXiv Detail & Related papers (2026-01-12T12:24:44Z)
- Leveraging Scene Context with Dual Networks for Sequential User Behavior Modeling [58.72480539725212]
We propose a novel Dual Sequence Prediction network (DSPnet) to capture the dynamic interests and the interplay between scenes and items for future behavior prediction.
DSPnet consists of two parallel networks dedicated to learning users' dynamic interests over items and scenes, and a sequence feature enhancement module to capture the interplay for enhanced future behavior prediction.
arXiv Detail & Related papers (2025-09-30T12:26:57Z)
- PRIME: Large Language Model Personalization with Cognitive Dual-Memory and Personalized Thought Process [6.27203886967197]
Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions.
We introduce a unified framework, PRIME, using episodic and semantic memory mechanisms.
Experiments validate PRIME's effectiveness across both long- and short-context scenarios.
arXiv Detail & Related papers (2025-07-07T01:54:34Z)
- Embodied Agents Meet Personalization: Investigating Challenges and Solutions Through the Lens of Memory Utilization [26.34637576545121]
LLM-powered embodied agents have shown success on conventional object-rearrangement tasks.
However, providing personalized assistance that leverages user-specific knowledge from past interactions presents new challenges.
We investigate these challenges through the lens of agents' memory utilization.
arXiv Detail & Related papers (2025-05-22T08:00:10Z)
- Dynamic Textual Prompt For Rehearsal-free Lifelong Person Re-identification [30.782126710974165]
Lifelong person re-identification attempts to recognize people across cameras and integrate new knowledge from continuous data streams.
Key challenges involve addressing catastrophic forgetting caused by parameter updating and domain shift.
We propose using textual descriptions as guidance to encourage the ReID model to learn cross-domain invariant features without retaining samples.
arXiv Detail & Related papers (2024-11-09T00:57:19Z)
- Personalized Large Language Model Assistant with Evolving Conditional Memory [15.780762727225122]
We present a plug-and-play framework that facilitates personalized large language model assistants with evolving conditional memory.
The personalized assistant focuses on intelligently preserving the knowledge and experience from the dialogue history with the user.
arXiv Detail & Related papers (2023-12-22T02:39:15Z)
- Black-box Unsupervised Domain Adaptation with Bi-directional Atkinson-Shiffrin Memory [59.51934126717572]
Black-box unsupervised domain adaptation (UDA) learns with source predictions of target data without accessing either source data or source models during training.
We propose BiMem, a bi-directional memorization mechanism that learns to remember useful and representative information to correct noisy pseudo labels on the fly.
BiMem achieves superior domain adaptation performance consistently across various visual recognition tasks such as image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2023-08-25T08:06:48Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
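LaMemo's look-ahead idea, memory that also attends to right-side tokens while current tokens remain causal, can be illustrated with a toy attention mask. The function below is an illustrative guess at the mask's shape, not the paper's actual formulation.

```python
def look_ahead_mask(mem_len, seg_len):
    """Build a boolean attention mask over [memory | current segment].

    Memory rows attend bi-directionally, including right-side (future)
    segment tokens; current-segment tokens keep ordinary causal attention
    over the memory and preceding tokens.
    """
    total = mem_len + seg_len
    mask = [[False] * total for _ in range(total)]
    for i in range(mem_len):              # memory slots: full bi-directional view
        for j in range(total):
            mask[i][j] = True
    for i in range(seg_len):              # segment tokens: causal attention
        for j in range(mem_len + i + 1):
            mask[mem_len + i][j] = True
    return mask
```

Because only the memory rows gain extra attended positions, the added work grows linearly with the memory length, which matches the overhead the summary describes.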
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve the attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
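The temporal gating idea in the entry above, modulating attention by how recent each interaction is, can be sketched as follows. The exponential decay form and the `decay` parameter are illustrative assumptions, not the paper's actual gating function.

```python
import math

def time_gated_scores(scores, ages, decay=0.1):
    """Scale raw attention scores by an exponential recency factor
    exp(-decay * age), then renormalize with a softmax so older
    interactions contribute less to the attended representation."""
    gated = [s * math.exp(-decay * a) for s, a in zip(scores, ages)]
    mx = max(gated)                          # shift for numerical stability
    exps = [math.exp(g - mx) for g in gated]
    z = sum(exps)
    return [e / z for e in exps]
```

With two equally relevant items where one is much older, the gate shifts attention weight toward the recent item while the weights still sum to one.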
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.