Personalized Graph-Based Retrieval for Large Language Models
- URL: http://arxiv.org/abs/2501.02157v2
- Date: Sat, 31 May 2025 06:33:39 GMT
- Title: Personalized Graph-Based Retrieval for Large Language Models
- Authors: Steven Au, Cameron J. Dimacali, Ojasmitha Pedirappagari, Namyong Park, Franck Dernoncourt, Yu Wang, Nikos Kanakaris, Hanieh Deilamsalehy, Ryan A. Rossi, Nesreen K. Ahmed
- Abstract summary: We propose a framework that leverages user-centric knowledge graphs to enrich personalization. By directly integrating structured user knowledge into the retrieval process and augmenting prompts with user-relevant context, PGraphRAG enhances contextual understanding and output quality. We also introduce the Personalized Graph-based Benchmark for Text Generation, designed to evaluate personalized text generation tasks in real-world settings where user history is sparse or unavailable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large language models (LLMs) evolve, their ability to deliver personalized and context-aware responses offers transformative potential for improving user experiences. Existing personalization approaches, however, often rely solely on user history to augment the prompt, limiting their effectiveness in generating tailored outputs, especially in cold-start scenarios with sparse data. To address these limitations, we propose Personalized Graph-based Retrieval-Augmented Generation (PGraphRAG), a framework that leverages user-centric knowledge graphs to enrich personalization. By directly integrating structured user knowledge into the retrieval process and augmenting prompts with user-relevant context, PGraphRAG enhances contextual understanding and output quality. We also introduce the Personalized Graph-based Benchmark for Text Generation, designed to evaluate personalized text generation tasks in real-world settings where user history is sparse or unavailable. Experimental results show that PGraphRAG significantly outperforms state-of-the-art personalization methods across diverse tasks, demonstrating the unique advantages of graph-based retrieval for personalization.
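As a rough illustration of the retrieval step the abstract describes, the sketch below represents a user-centric knowledge graph as (subject, relation, object) triples, retrieves the triples most relevant to a query, and augments the prompt with them. The triple store, the overlap-based scoring rule, and all function names here are illustrative assumptions, not PGraphRAG's actual implementation.

```python
# Hypothetical sketch of graph-based retrieval-augmented prompting.
# The user-centric knowledge graph is a list of (subject, relation, object)
# triples; retrieval ranks triples by token overlap with the query.

def retrieve_user_context(graph, query, k=2):
    """Return the k triples whose text overlaps the query tokens the most."""
    q_tokens = set(query.lower().split())

    def score(triple):
        t_tokens = set(" ".join(triple).lower().split())
        return len(q_tokens & t_tokens)

    return sorted(graph, key=score, reverse=True)[:k]

def build_prompt(query, triples):
    """Prepend the retrieved user knowledge to the task instruction."""
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in triples)
    return f"User context:\n{context}\n\nTask: {query}"

# Toy user graph (illustrative data, not from the benchmark).
user_graph = [
    ("user", "prefers", "concise answers"),
    ("user", "reviewed", "wireless headphones"),
    ("user", "rated", "battery life highly"),
]

query = "write a headphones review"
triples = retrieve_user_context(user_graph, query, k=2)
prompt = build_prompt(query, triples)
print(prompt)
```

Even this naive overlap scorer illustrates the paper's core claim: structured user knowledge enters the prompt explicitly, rather than being recovered from raw history text alone.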
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z) - Reasoning-Based Personalized Generation for Users with Sparse Data [120.94029850012045]
We introduce GraSPer, a novel framework for enhancing personalized text generation under sparse context. GraSPer first augments user context by predicting items that the user would likely interact with in the future. With reasoning alignment, it then generates texts for these interactions to enrich the augmented context. Finally, it generates personalized outputs conditioned on both the real and synthetic histories.
arXiv Detail & Related papers (2026-01-31T01:54:23Z) - Personalized Reward Modeling for Text-to-Image Generation [9.780251969338044]
We present PIGReward, a personalized reward model that dynamically generates user-conditioned evaluation dimensions and assesses images through chain-of-thought (CoT) reasoning. PIGReward provides personalized feedback that drives user-specific prompt optimization, improving alignment between generated images and individual intent. Extensive experiments demonstrate that PIGReward surpasses existing methods in both accuracy and interpretability.
arXiv Detail & Related papers (2025-11-21T12:04:24Z) - Personalize Before Retrieve: LLM-based Personalized Query Expansion for User-Centric Retrieval [34.298743064665395]
Personalize Before Retrieve (PBR) is a framework that incorporates user-specific signals into query expansion prior to retrieval. PBR consistently outperforms strong baselines, with up to 10% gains on PersonaBench across retrievers.
arXiv Detail & Related papers (2025-10-10T02:24:09Z) - Personalized Vision via Visual In-Context Learning [62.85784251383279]
We present PICO, a visual in-context learning framework for personalized vision. PICO infers the underlying transformation and applies it to new inputs without retraining. We also propose an attention-guided seed scorer that improves reliability via efficient inference scaling.
arXiv Detail & Related papers (2025-09-29T17:58:45Z) - PREFINE: Personalized Story Generation via Simulated User Critics and User-Specific Rubric Generation [2.8324853634693614]
PREFINE is a novel framework that extends the Critique-and-Refine paradigm to personalization. PREFINE constructs a pseudo-user agent from a user's interaction history and generates user-specific rubrics. Our approach holds potential for enabling efficient personalization in broader applications, such as dialogue systems, education, and recommendation.
arXiv Detail & Related papers (2025-09-16T16:39:40Z) - Embedding-to-Prefix: Parameter-Efficient Personalization for Pre-Trained Large Language Models [6.445337954429245]
Large language models (LLMs) excel at generating contextually relevant content. We propose Embedding-to-Prefix (E2P), a parameter-efficient method that injects context embeddings into an LLM's hidden representation space. We evaluate E2P across two public datasets and in a production setting: dialogue personalization on Persona-Chat, contextual headline generation on PENS, and large-scale personalization for music and podcast consumption.
arXiv Detail & Related papers (2025-05-16T13:34:25Z) - Extracting Knowledge Graphs from User Stories using LangChain [0.0]
This thesis introduces a novel methodology for the automated generation of knowledge graphs from user stories by leveraging the advanced capabilities of large language models. The User Story Graph Transformer module was developed to extract nodes and relationships from user stories using an LLM to construct accurate knowledge graphs.
arXiv Detail & Related papers (2025-05-14T18:25:58Z) - Rehearse With User: Personalized Opinion Summarization via Role-Playing based on Large Language Models [29.870187698924852]
Large language models face difficulties in personalized tasks involving long texts.
By having the model act as the user, the model can better understand the user's personalized needs.
Our method effectively improves the level of personalization in LLM-generated summaries.
arXiv Detail & Related papers (2025-03-01T11:05:01Z) - TOBUGraph: Knowledge Graph-Based Retrieval for Enhanced LLM Performance Beyond RAG [3.8704987495086542]
TOBUGraph is a graph-based retrieval framework that first constructs the knowledge graph from unstructured data.
It extracts structured knowledge and diverse relationships among data, going beyond RAG's text-to-text similarity.
We demonstrate TOBUGraph's effectiveness in TOBU, a real-world application in production for personal memory organization and retrieval.
arXiv Detail & Related papers (2024-12-06T22:05:39Z) - Guided Profile Generation Improves Personalization with LLMs [3.2685922749445617]
In modern commercial systems, including recommendation, ranking, and e-commerce platforms, there is a trend towards incorporating personalization context as input into large language models (LLMs).
We propose Guided Profile Generation (GPG), a general method designed to generate personal profiles in natural language.
Our experimental results show that GPG improves the LLM's personalization ability across different tasks; for example, it improves accuracy in predicting personal preference by 37% compared to feeding the LLM raw personal context directly.
arXiv Detail & Related papers (2024-09-19T21:29:56Z) - Step-Back Profiling: Distilling User History for Personalized Scientific Writing [50.481041470669766]
Large language models (LLMs) excel at a variety of natural language processing tasks, yet they struggle to generate personalized content for individuals.
We introduce STEP-BACK PROFILING to personalize LLMs by distilling user history into concise profiles.
Our approach outperforms the baselines by up to 3.6 points on the general personalization benchmark.
arXiv Detail & Related papers (2024-06-20T12:58:26Z) - Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts.
In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency in maintaining accuracy with a significantly reduced retrieval size.
Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z) - Leveraging Large Language Models for Node Generation in Few-Shot Learning on Text-Attributed Graphs [5.587264586806575]
We propose a plug-and-play approach to empower text-attributed graphs through node generation using large language models (LLMs). LLMs extract semantic information from labels and generate samples that belong to those categories as exemplars. We employ an edge predictor to capture the structural information inherent in the raw dataset and integrate the newly generated samples into the original graph.
arXiv Detail & Related papers (2023-10-15T16:04:28Z) - Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters [66.17039929803933]
We propose a novel transfer learning framework that updates only 0.3% of model parameters to learn style-specific attributes for response generation.
We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset.
arXiv Detail & Related papers (2022-10-07T00:09:22Z) - Scene Graph Modification as Incremental Structure Expanding [61.84291817776118]
We focus on scene graph modification (SGM), where the system is required to learn how to update an existing scene graph based on a natural language query.
We frame SGM as a graph expansion task by introducing incremental structure expanding (ISE).
We construct a challenging dataset that contains more complicated queries and larger scene graphs than existing datasets.
arXiv Detail & Related papers (2022-09-15T16:26:14Z) - Incremental user embedding modeling for personalized text classification [12.381095398791352]
Individual user profiles and interaction histories play a significant role in providing customized experiences in real-world applications.
We propose an incremental user embedding modeling approach, in which embeddings of user's recent interaction histories are dynamically integrated into the accumulated history vectors.
We demonstrate the effectiveness of this approach by applying it to a personalized multi-class classification task based on the Reddit dataset.
arXiv Detail & Related papers (2022-02-13T17:33:35Z)
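The incremental update described in the last entry, where recent interaction embeddings are dynamically folded into an accumulated history vector, can be sketched as a simple running blend. The exponential-moving-average form, the blending weight, and the function name below are assumptions for illustration, not the paper's exact update rule.

```python
# Minimal sketch of incremental user-embedding modeling: each new
# interaction embedding is blended into the accumulated history vector,
# so the profile updates without reprocessing the full history.
import numpy as np

def update_user_embedding(history_vec, interaction_vec, alpha=0.25):
    """Blend the most recent interaction into the running history vector.

    alpha controls how strongly recent behavior outweighs older history
    (an assumed exponential-moving-average scheme).
    """
    return (1.0 - alpha) * history_vec + alpha * interaction_vec

# A fresh user starts from a zero history; each interaction nudges it.
history = np.zeros(4)
for interaction in (np.ones(4), 2 * np.ones(4)):
    history = update_user_embedding(history, interaction)

print(history)  # each dimension: 0.75 * 0.25 + 0.25 * 2 = 0.6875
```

The appeal of such a scheme for personalized classification is that the user representation stays a fixed-size vector regardless of history length, so it can be updated online as interactions arrive.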
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.