PRIME: Large Language Model Personalization with Cognitive Memory and Thought Processes
- URL: http://arxiv.org/abs/2507.04607v2
- Date: Mon, 14 Jul 2025 05:54:45 GMT
- Title: PRIME: Large Language Model Personalization with Cognitive Memory and Thought Processes
- Authors: Xinliang Frederick Zhang, Nick Beauchamp, Lu Wang
- Abstract summary: Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions. We introduce a unified framework, PRIME, using episodic and semantic memory mechanisms. Experiments validate PRIME's effectiveness across both long- and short-context scenarios.
- Score: 6.631626634132574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language model (LLM) personalization aims to align model outputs with individuals' unique preferences and opinions. While recent efforts have implemented various personalization methods, a unified theoretical framework that can systematically explain the drivers of effective personalization is still lacking. In this work, we integrate the well-established cognitive dual-memory model into LLM personalization by mirroring episodic memory to historical user engagements and semantic memory to long-term, evolving user beliefs. Specifically, we systematically investigate memory instantiations and introduce a unified framework, PRIME, using episodic and semantic memory mechanisms. We further augment PRIME with a novel personalized thinking capability inspired by the slow-thinking strategy. Moreover, recognizing the absence of suitable benchmarks, we introduce a dataset built from Reddit's Change My View (CMV) forum, specifically designed to evaluate long-context personalization. Extensive experiments validate PRIME's effectiveness across both long- and short-context scenarios. Further analysis confirms that PRIME effectively captures dynamic personalization beyond mere popularity biases.
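As a rough illustration only (all class and function names below are hypothetical, not PRIME's actual implementation), the dual-memory idea can be sketched as an episodic store that retrieves raw past engagements plus a semantic store that keeps a distilled, evolving belief summary, with both feeding a slow-thinking prompt:

```python
# Minimal sketch of a dual-memory personalization loop (hypothetical API,
# not PRIME's actual implementation).

class EpisodicMemory:
    """Stores raw past engagements; retrieval mirrors episodic recall."""
    def __init__(self):
        self.events = []  # e.g., (post, user_comment) strings

    def add(self, engagement: str):
        self.events.append(engagement)

    def retrieve(self, query: str, k: int = 3):
        # Toy relevance score: word overlap between query and each event.
        score = lambda e: len(set(query.lower().split()) & set(e.lower().split()))
        return sorted(self.events, key=score, reverse=True)[:k]

class SemanticMemory:
    """Keeps a distilled, evolving summary of the user's long-term beliefs."""
    def __init__(self):
        self.belief_summary = ""

    def update(self, new_engagement: str, summarize):
        # `summarize` stands in for an LLM call that folds new evidence
        # into the running belief profile.
        self.belief_summary = summarize(self.belief_summary, new_engagement)

def personalized_answer(llm, episodic, semantic, query: str) -> str:
    evidence = "\n".join(episodic.retrieve(query))
    prompt = (
        f"User belief profile:\n{semantic.belief_summary}\n\n"
        f"Relevant past engagements:\n{evidence}\n\n"
        # 'Personalized thinking': ask the model to reason slowly about
        # how this user would respond before producing the final answer.
        f"Think step by step about how this user would respond, then answer:\n{query}"
    )
    return llm(prompt)
```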
Related papers
- PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time [87.99027488664282]
PersonaAgent is a framework designed to address versatile personalization tasks. It integrates a personalized memory module and a personalized action module, and a test-time user-preference alignment strategy keeps outputs aligned with the user's preferences in real time.
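A minimal sketch of what test-time preference alignment could look like, assuming a simple candidate-reranking scheme (the helper names are invented; the paper's actual strategy may differ):

```python
# Hypothetical sketch: score candidate responses against a few examples of
# the user's own writing and return the closest one.

def word_overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def align_at_test_time(candidates, user_examples):
    # Pick the candidate most similar to how this user tends to write.
    def fit(candidate):
        return sum(word_overlap(candidate, ex) for ex in user_examples)
    return max(candidates, key=fit)
```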
arXiv Detail & Related papers (2025-06-06T17:29:49Z)
- Embodied Agents Meet Personalization: Exploring Memory Utilization for Personalized Assistance [18.820008753896623]
Embodied agents empowered by large language models (LLMs) have shown strong performance in household object rearrangement tasks. Yet, the effectiveness of embodied agents in utilizing memory for personalized assistance remains largely underexplored. We present MEMENTO, a personalized embodied agent evaluation framework designed to assess memory utilization capabilities.
arXiv Detail & Related papers (2025-05-22T08:00:10Z)
- A Personalized Conversational Benchmark: Towards Simulating Personalized Conversations [112.81207927088117]
PersonaConvBench is a benchmark for evaluating personalized reasoning and generation in multi-turn conversations with large language models (LLMs). We benchmark several commercial and open-source LLMs under a unified prompting setup and observe that incorporating personalized history yields substantial performance improvements.
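Illustratively, a unified prompting setup with and without personalized history might look like the following (hypothetical helper, not the benchmark's code):

```python
# Sketch of prompting with vs. without the user's conversation history.

def build_prompt(question: str, history: list[str] | None = None) -> str:
    if not history:
        return f"Answer the question:\n{question}"
    joined = "\n".join(f"- {turn}" for turn in history)
    return (
        "Previous conversations with this user:\n"
        f"{joined}\n\n"
        f"Answer in a way consistent with this user's history:\n{question}"
    )
```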
arXiv Detail & Related papers (2025-05-20T09:13:22Z)
- Quantifying Memory Utilization with Effective State-Size [73.52115209375343]
We develop a measure of "memory utilization". This metric is tailored to the fundamental class of systems with input-invariant and input-varying linear operators.
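The summary does not give the metric's definition; as a loosely related, hedged stand-in, one classical way to quantify how much of a linear state map is actually used is an entropy-based effective rank of its singular-value spectrum:

```python
# Stand-in illustration only; the paper's actual effective state-size
# metric may be defined quite differently.

import numpy as np

def effective_rank(A: np.ndarray) -> float:
    """Entropy-based effective rank of a linear operator."""
    s = np.linalg.svd(A, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

A = np.diag([1.0, 0.5, 1e-6])  # essentially a 2-dimensional map
print(effective_rank(A))       # ~1.9: only two directions carry signal
```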
arXiv Detail & Related papers (2025-04-28T08:12:30Z)
- DRC: Enhancing Personalized Image Generation via Disentangled Representation Composition [69.10628479553709]
We introduce DRC, a novel personalized image generation framework that enhances Large Multimodal Models (LMMs). DRC explicitly extracts user style preferences and semantic intentions from history images and the reference image, respectively. It involves two critical learning stages: 1) disentanglement learning, which employs a dual-tower disentangler to explicitly separate style and semantic features, optimized via a reconstruction-driven paradigm with difficulty-aware importance sampling; and 2) personalized modeling, which applies semantic-preserving augmentations to effectively adapt the disentangled representations for robust personalized generation.
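A compact, hypothetical sketch of the dual-tower idea (difficulty-aware importance sampling and the rest of the DRC pipeline omitted): two towers produce separate style and semantic codes, and a reconstruction loss drives the disentanglement:

```python
# Illustrative dual-tower disentangler over image embeddings, not DRC's code.

import torch
import torch.nn as nn

class DualTowerDisentangler(nn.Module):
    def __init__(self, dim=512, code=128):
        super().__init__()
        self.style_tower = nn.Sequential(nn.Linear(dim, code), nn.ReLU(), nn.Linear(code, code))
        self.semantic_tower = nn.Sequential(nn.Linear(dim, code), nn.ReLU(), nn.Linear(code, code))
        self.decoder = nn.Linear(2 * code, dim)

    def forward(self, x):
        style, semantic = self.style_tower(x), self.semantic_tower(x)
        recon = self.decoder(torch.cat([style, semantic], dim=-1))
        return style, semantic, recon

model = DualTowerDisentangler()
x = torch.randn(8, 512)                  # a batch of image embeddings
style, semantic, recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction-driven objective
```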
arXiv Detail & Related papers (2025-04-24T08:10:10Z)
- Personalized Language Models via Privacy-Preserving Evolutionary Model Merging [57.161917758405465]
Personalization in large language models (LLMs) seeks to tailor models to individual user or user group preferences. We propose Privacy-Preserving Model Merging via Evolutionary Algorithms (PriME). PriME employs gradient-free methods to directly optimize task-specific metrics while preserving user privacy.
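A minimal sketch of gradient-free evolutionary search over merge weights, assuming only a black-box task metric (function names are illustrative, not PriME's code):

```python
# Evolve per-module mixing weights between a base model and user-adapted
# models, scoring candidates only on a task metric: no gradients needed.

import random

def evolve_merge_weights(evaluate, n_weights, generations=50, pop=16, sigma=0.1):
    """`evaluate(weights) -> float` is the task-specific metric to maximize."""
    population = [[random.random() for _ in range(n_weights)] for _ in range(pop)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: pop // 4]  # keep the elite
        population = parents + [
            [min(1.0, max(0.0, w + random.gauss(0, sigma)))
             for w in random.choice(parents)]
            for _ in range(pop - len(parents))
        ]
    return max(population, key=evaluate)
```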
arXiv Detail & Related papers (2025-03-23T09:46:07Z)
- Personalization Toolkit: Training Free Personalization of Large Vision Language Models [11.026377387506216]
This paper introduces a training-free approach to LVLM personalization by leveraging pre-trained vision foundation models. Our model-agnostic vision toolkit enables flexible and efficient personalization without the need for extensive retraining.
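One plausible shape for such a toolkit, sketched with invented names: embed a query image with a frozen vision foundation model, match it against stored user-concept embeddings, and inject the recognized concept into the prompt:

```python
# Hypothetical retrieval-based personalization sketch, not the paper's API.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize_concept(query_emb, concept_bank):
    """concept_bank: dict mapping concept name -> reference embedding."""
    return max(concept_bank, key=lambda name: cosine(query_emb, concept_bank[name]))

# Embeddings would come from a frozen vision foundation model in practice.
concept_bank = {"my-dog-rex": np.random.rand(512), "my-red-bike": np.random.rand(512)}
name = recognize_concept(np.random.rand(512), concept_bank)
prompt = f"The image shows {name}. Describe what it is doing."
```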
arXiv Detail & Related papers (2025-02-04T16:19:20Z)
- Personalized Large Language Models [1.0881867638866944]
This paper investigates methods to personalize large language models (LLMs).
Results demonstrate that personalized fine-tuning improves model reasoning compared to non-personalized models.
Experiments on datasets for emotion recognition and hate speech detection show consistent performance gains with personalized methods.
arXiv Detail & Related papers (2024-02-14T15:55:30Z)
- Personalized Large Language Model Assistant with Evolving Conditional Memory [15.780762727225122]
We present a plug-and-play framework that facilitates personalized large language model assistants with evolving conditional memory.
The personalized assistant focuses on intelligently preserving knowledge and experience from its dialogue history with the user.
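A hedged sketch of the evolving-memory idea: after each turn, the memory is rewritten conditioned on its current contents rather than appended to indefinitely (the summarize call stands in for an LLM; names are hypothetical):

```python
# Fold each new exchange into a compact, conditional memory.

def update_memory(memory: str, user_turn: str, assistant_turn: str, summarize) -> str:
    # `summarize` stands in for an LLM call that rewrites the memory so it
    # stays small while keeping user-specific facts and preferences.
    return summarize(
        f"Current memory:\n{memory}\n\n"
        f"New exchange:\nUser: {user_turn}\nAssistant: {assistant_turn}\n\n"
        "Rewrite the memory to include any new durable facts about the user."
    )
```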
arXiv Detail & Related papers (2023-12-22T02:39:15Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanisms and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
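A rough numpy sketch of the two named ingredients, temporal gating of attention and dot-product Top-K retrieval (hypothetical, not the paper's exact architecture):

```python
# Attention over past interactions, down-weighted by staleness, plus
# dot-product Top-K candidate retrieval.

import numpy as np

def time_aware_attention(query, item_embs, ages, decay=0.1):
    """Returns normalized attention weights over past items."""
    scores = item_embs @ query              # dot-product relevance
    gated = scores * np.exp(-decay * ages)  # temporal gating: old items fade
    weights = np.exp(gated - gated.max())   # stable softmax
    return weights / weights.sum()

def top_k(user_vec, candidate_embs, k=10):
    # Dot-product Top-K retrieval over the candidate catalogue.
    return np.argsort(candidate_embs @ user_vec)[::-1][:k]
```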
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
- PeTra: A Sparsely Supervised Memory Model for People Tracking [50.98911178059019]
We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots.
We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance.
PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.
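A hypothetical sketch of slot-based entity tracking in the spirit described above: each mention either refreshes the most similar slot or claims a fresh one, so slots come to track distinct people (invented update rule, not PeTra's):

```python
# Toy memory-slot update for entity tracking.

import numpy as np

def update_slots(slots, mention_emb, threshold=0.6):
    """slots: (n_slots, dim) array; returns the index of the slot used."""
    sims = slots @ mention_emb / (
        np.linalg.norm(slots, axis=1) * np.linalg.norm(mention_emb) + 1e-9
    )
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        slots[best] = 0.5 * slots[best] + 0.5 * mention_emb   # refresh known entity
    else:
        best = int(np.argmin(np.linalg.norm(slots, axis=1)))  # claim least-used slot
        slots[best] = mention_emb
    return best
```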
arXiv Detail & Related papers (2020-05-06T17:45:35Z)