Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations
- URL: http://arxiv.org/abs/2509.07133v1
- Date: Mon, 08 Sep 2025 18:33:36 GMT
- Title: Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations
- Authors: Fernando Spadea, Oshani Seneviratne
- Abstract summary: We present a neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems. We adapt user-side Knowledge Graphs (KGs) at inference time to suppress feature co-occurrence patterns. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a lightweight neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems by adapting user-side Knowledge Graphs (KGs) at inference time. Instead of retraining models or relying on opaque heuristics, our method restructures a user's Personalized Knowledge Graph (PKG) to suppress feature co-occurrence patterns that reinforce Personalized Information Environments (PIEs), i.e., algorithmically induced filter bubbles that constrain content diversity. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations while preserving topical relevance. We introduce a family of symbolic adaptation strategies, including soft reweighting, hard inversion, and targeted removal of biased triples, and a client-side learning algorithm that optimizes their application per user. Experiments on a recipe recommendation benchmark show that personalized PKG adaptations significantly increase content novelty while maintaining recommendation quality, outperforming global adaptation and naive prompt-based methods.
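The abstract names three symbolic adaptation strategies applied to a user's PKG: soft reweighting, hard inversion, and targeted removal of biased triples. A minimal sketch of that idea, assuming a PKG represented as weighted (subject, predicate, object, weight) triples; all function and field names here are illustrative, not the paper's actual schema or API:

```python
from collections import Counter

def feature_counts(pkg):
    """Count how often each (predicate, object) feature co-occurs in the PKG."""
    return Counter((p, o) for s, p, o, _ in pkg)

def adapt_pkg(pkg, strategy="soft", threshold=2, alpha=0.5):
    """Suppress over-represented features (a stand-in for PIE-reinforcing patterns).

    strategy:
      "soft"   -- scale down the weight of dominant features (soft reweighting)
      "invert" -- flip dominant weights to negative (hard inversion)
      "remove" -- drop dominant triples entirely (targeted removal)
    """
    counts = feature_counts(pkg)
    adapted = []
    for s, p, o, w in pkg:
        if counts[(p, o)] < threshold:          # not dominant: keep unchanged
            adapted.append((s, p, o, w))
        elif strategy == "soft":
            adapted.append((s, p, o, w * alpha))
        elif strategy == "invert":
            adapted.append((s, p, o, -w))
        elif strategy == "remove":
            continue                            # triple is dropped
    return adapted

def pkg_to_prompt(pkg):
    """Serialize the adapted triples into a structured prompt fragment."""
    lines = [f"{s} {p} {o} (weight={w:.2f})" for s, p, o, w in pkg]
    return "User profile:\n" + "\n".join(lines)
```

The per-user learning algorithm described in the abstract would then choose `strategy` (and parameters like `threshold`/`alpha`) to trade novelty against recommendation quality for each user.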
Related papers
- One Adapts to Any: Meta Reward Modeling for Personalized LLM Alignment [55.86333374784959]
We argue that addressing these constraints requires a paradigm shift from fitting data to learn user preferences to learning the process of preference adaptation. We propose Meta Reward Modeling (MRM), which reformulates personalized reward modeling as a meta-learning problem. We show that MRM enhances few-shot personalization, improves user robustness, and consistently outperforms baselines.
arXiv Detail & Related papers (2026-01-26T17:55:52Z) - Generative Actor Critic [74.04971271003869]
Generative Actor Critic (GAC) is a novel framework that decouples sequential decision-making by reframing policy evaluation as learning a generative model of the joint distribution over trajectories and returns. Experiments on Gym-MuJoCo and Maze2D benchmarks demonstrate GAC's strong offline performance and significantly enhanced offline-to-online improvement compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-12-25T06:31:11Z) - A Model-agnostic Strategy to Mitigate Embedding Degradation in Personalized Federated Recommendation [34.915843795521134]
We propose a novel model-agnostic strategy for FedRec to strengthen the personalized embedding utility. PLGC is the first research in federated recommendation to alleviate the dimensional collapse issue.
arXiv Detail & Related papers (2025-08-27T06:03:52Z) - What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation. It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals. It mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z) - Graph Retrieval-Augmented LLM for Conversational Recommendation Systems [52.35491420330534]
G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems) is a training-free framework that combines graph retrieval-augmented generation and in-context learning. G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
arXiv Detail & Related papers (2025-03-09T03:56:22Z) - Enhancing Recommendation Systems with GNNs and Addressing Over-Smoothing [7.06152589784002]
This paper addresses key challenges in enhancing recommendation systems by leveraging Graph Neural Networks (GNNs). The proposed approach introduces three GNN-based recommendation models, specifically designed to mitigate over-smoothing. The study emphasizes the critical need for interpretability in recommendation systems, aiming to provide transparent and justifiable suggestions.
arXiv Detail & Related papers (2024-12-04T07:50:27Z) - Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation [55.5687800992432]
We propose a plug-and-play framework that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to unveil user preferences. This enables the LLM to transform KG entities into concise natural language descriptions, allowing them to comprehend domain-specific knowledge.
arXiv Detail & Related papers (2024-11-16T11:47:21Z) - RosePO: Aligning LLM-based Recommenders with Human Values [38.029251417802044]
We propose a general framework -- Recommendation with smoothing personalized Preference Optimization (RosePO).
RosePO better aligns with customized human values during the post-training stage.
Evaluation on three real-world datasets demonstrates the effectiveness of our method.
arXiv Detail & Related papers (2024-10-16T12:54:34Z) - LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance the recommendation and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z) - Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback [70.32795295142648]
Linear alignment is a novel algorithm that aligns language models with human preferences in one single inference step.
Experiments on both general and personalized preference datasets demonstrate that linear alignment significantly enhances the performance and efficiency of LLM alignment.
arXiv Detail & Related papers (2024-01-21T10:46:23Z) - AURO: Reinforcement Learning for Adaptive User Retention Optimization in Recommender Systems [25.18963930580529]
Reinforcement Learning (RL) has garnered increasing attention for its ability to optimize user retention in recommender systems. This paper introduces a novel approach called Adaptive User Retention Optimization (AURO) to address this challenge.
arXiv Detail & Related papers (2023-10-06T02:45:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.