Shielded RecRL: Explanation Generation for Recommender Systems without Ranking Degradation
- URL: http://arxiv.org/abs/2601.03608v1
- Date: Tue, 14 Oct 2025 16:37:48 GMT
- Title: Shielded RecRL: Explanation Generation for Recommender Systems without Ranking Degradation
- Authors: Ansh Tiwari, Ayush Chauhan
- Abstract summary: Shielded RecRL is a reinforcement learning approach to generate personalized explanations for recommender systems. Our results show that Shielded RecRL enhances user-facing aspects of recommendations through rich, personalized explanations without degrading core recommendation accuracy.
- Score: 0.17188280334580197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Shielded RecRL, a reinforcement learning approach to generate personalized explanations for recommender systems without sacrificing the system's original ranking performance. Unlike prior RLHF-based recommender methods that directly optimize item rankings, our two-tower architecture keeps the recommender's ranking model intact while a language model learns to produce helpful explanations. We design a composite reward signal combining explanation length, content relevance, and coherence, and apply proximal policy optimization (PPO) with a KL-divergence constraint to fine-tune a large language model with only 0.4% of its parameters trainable via LoRA adapters. In experiments on an Amazon Books dataset (approximately 50K interactions in the fantasy and romance genres), Shielded RecRL improved the relative click-through rate (CTR) by 22.5% (1.225x over baseline) while keeping the recommender's item-ranking behavior virtually unchanged. An extensive ablation study confirms that our gradient shielding strategy and reward design effectively balance explanation quality and policy drift. Our results demonstrate that Shielded RecRL enhances user-facing aspects of recommendations through rich, personalized explanations without degrading core recommendation accuracy.
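The abstract names three concrete mechanisms: a frozen ranking tower ("gradient shielding"), a weighted composite reward, and KL-constrained PPO on a LoRA-adapted LLM. Below is a minimal sketch of how these pieces could fit together; the module shapes, reward weights, and function signatures are illustrative assumptions, not the paper's code, and the 0.4% trainable fraction would in practice come from attaching LoRA adapters to the language model rather than the stand-in linear layers shown here.

```python
import torch
import torch.nn as nn

class TwoTowerRec(nn.Module):
    def __init__(self, d=64, vocab=1000):
        super().__init__()
        self.ranker = nn.Linear(d, 1)         # stand-in for the ranking tower
        self.explainer = nn.Linear(d, vocab)  # stand-in for the LM head

    def shield(self):
        # "Gradient shielding" (as described in the abstract): freeze the
        # ranking tower so no RL gradient can alter item scores; rankings
        # stay identical before and after explanation tuning.
        for p in self.ranker.parameters():
            p.requires_grad_(False)

def composite_reward(length_score, relevance, coherence, w=(0.2, 0.5, 0.3)):
    # Hypothetical weights over the three reward terms named in the abstract.
    return w[0] * length_score + w[1] * relevance + w[2] * coherence

def ppo_step(logp_new, logp_old, logp_ref, advantage, clip=0.2, kl_coef=0.1):
    # Clipped PPO surrogate plus a KL penalty toward the frozen reference
    # policy, which is what limits policy drift in the explainer.
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
    surrogate = torch.min(ratio * advantage, clipped * advantage)
    kl = logp_new - logp_ref  # simple per-token KL estimate
    return -(surrogate - kl_coef * kl).mean()
```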
Related papers
- Deep Reinforcement Learning for Ranking Utility Tuning in the Ad Recommender System at Pinterest [10.816672840498079]
The ranking utility function in an ad recommender system plays a central role in balancing values across the platform, advertisers, and users.
Traditional manual tuning, while offering simplicity and interpretability, often yields suboptimal results.
We propose a general Deep Reinforcement Learning framework for personalized utility tuning.
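As a hedged illustration of what "utility tuning" means here: such a utility is typically a weighted combination of per-objective scores, and the DRL policy replaces hand-tuned weights with per-request (personalized) ones. The objective names and the weighted-sum form below are assumptions, not Pinterest's formula.

```python
def ranking_utility(scores: dict, weights: dict) -> float:
    # Utility = weighted sum of per-objective model scores,
    # e.g. scores = {"ctr": 0.12, "advertiser_value": 0.80}.
    return sum(weights[k] * scores[k] for k in scores)

# A DRL tuner would output `weights` per request and be rewarded on
# downstream platform, advertiser, and user metrics.
print(ranking_utility({"ctr": 0.12, "advertiser_value": 0.80},
                      {"ctr": 2.0, "advertiser_value": 1.0}))
```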
arXiv Detail & Related papers (2025-09-05T17:57:45Z)
- End-to-End Personalization: Unifying Recommender Systems with Large Language Models [0.0]
We propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs).
LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews.
We evaluate our model on benchmark datasets, including MovieLens 100k and 1M, where it consistently outperforms strong baselines.
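A rough sketch of the enrichment step, assuming a sentence encoder produces node features and a single GAT layer propagates them over the user-item graph (the encoder, dimensions, and torch-geometric usage are all assumptions):

```python
import torch
from torch_geometric.nn import GATConv  # assumes torch-geometric is installed

def profile_text(title, genres, overview):
    # Stand-in for the LLM-generated profile; a real system would prompt an
    # LLM with the metadata instead of templating it.
    return f"{title}. Genres: {', '.join(genres)}. {overview}"

class GATRecommender(torch.nn.Module):
    def __init__(self, d_in=384, d_out=64, heads=4):
        super().__init__()
        self.gat = GATConv(d_in, d_out, heads=heads, concat=False)

    def forward(self, x, edge_index):
        # x: profile embeddings for user/item nodes; edge_index: interactions
        return self.gat(x, edge_index)
```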
arXiv Detail & Related papers (2025-08-02T22:46:50Z)
- On Symmetric Losses for Robust Policy Optimization with Noisy Preferences [55.8615920580824]
This work focuses on reward modeling, a core component in reinforcement learning from human feedback.
We propose a principled framework for robust policy optimization under noisy preferences.
We prove that symmetric losses enable successful policy optimization even under noisy labels.
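The noise-robustness hinges on a property worth making concrete: a loss l is symmetric when l(z) + l(-z) is constant, which the sigmoid loss satisfies and the usual logistic (Bradley-Terry) loss does not. A hedged sketch of the two losses on a reward-model margin:

```python
import torch
import torch.nn.functional as F

def sigmoid_loss(margin):
    # Symmetric: sigmoid(-z) + sigmoid(z) = 1, so flipped preference labels
    # contribute a constant and cannot bias the learned reward ordering.
    return torch.sigmoid(-margin)

def logistic_loss(margin):
    # Standard Bradley-Terry loss log(1 + e^{-z}); not symmetric.
    return F.softplus(-margin)

# margin = reward(chosen) - reward(rejected) from the reward model.
```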
arXiv Detail & Related papers (2025-05-30T15:30:43Z)
- R$^2$ec: Towards Large Recommender Models with Reasoning [59.32598867813266]
We propose R$^2$ec, a unified large recommender model with intrinsic reasoning capability.
R$^2$ec introduces a dual-head architecture that supports both reasoning chain generation and efficient item prediction in a single model.
To overcome the lack of annotated reasoning data, we design RecPO, a reinforcement learning framework.
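A hypothetical shape for such a dual-head model, with one head decoding the reasoning chain and one scoring all items from the same backbone state (sizes, pooling, and layer choices are assumptions):

```python
import torch
import torch.nn as nn

class DualHeadRec(nn.Module):
    def __init__(self, d=512, vocab=32000, n_items=10000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d, vocab)      # decodes the reasoning chain
        self.item_head = nn.Linear(d, n_items)  # scores all items in one pass

    def forward(self, h):                       # h: (batch, seq, d) embeddings
        z = self.backbone(h)
        return self.lm_head(z), self.item_head(z[:, -1])  # pool last state
```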
arXiv Detail & Related papers (2025-05-22T17:55:43Z)
- On a Scale-Invariant Approach to Bundle Recommendations in Candy Crush Saga [0.0]
This paper illustrates the use of attentive models for producing item recommendations in a mobile game scenario.
The methodology is subsequently applied to bundle recommendation in Candy Crush Saga.
We demonstrate that the recommendations enhance user engagement by 30% in terms of click rate and by more than 40% in terms of take rate.
arXiv Detail & Related papers (2024-08-13T10:42:32Z)
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
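One way to read "adaptive preference loss": each preference pair gets its own scale on the Bradley-Terry loss, chosen by an inner DRO-style problem that trades the scaled loss against an entropy penalty keeping scales near 1. The closed form below follows from that particular penalty and is an illustrative assumption, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def adaptive_bt_loss(margin, tau=1.0, lo=0.5, hi=2.0):
    base = F.softplus(-margin)  # per-pair Bradley-Terry loss
    # Inner problem: min_s  s*base + tau*(s*log s - s)  =>  s = exp(-base/tau),
    # i.e. ambiguous (high-loss) pairs are downweighted; clamp to a trust
    # region so no pair is fully ignored or dominant.
    s = torch.clamp(torch.exp(-base / tau), lo, hi).detach()
    return (s * base).mean()
```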
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Self-Play Preference Optimization for Language Model Alignment [75.83359213697854]
Recent advancements suggest that directly working with preference probabilities can yield a more accurate reflection of human preferences.
We propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game.
Our approach, dubbed Self-Play Preference Optimization (SPPO), utilizes iterative policy updates to provably approximate the Nash equilibrium.
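For reference, the exponential-weights update at the heart of SPPO (stated here from the paper's own framing; eta is a step size and P(y > pi_t | x) the probability that response y is preferred over a sample from the current policy):

```latex
% Each round reweights the policy by how often a response beats the
% current policy's own samples; the fixed point is the Nash equilibrium.
\pi_{t+1}(y \mid x) \;\propto\; \pi_t(y \mid x)\,
    \exp\!\big(\eta\, P(y \succ \pi_t \mid x)\big)
```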
arXiv Detail & Related papers (2024-05-01T17:59:20Z)
- Enhancing Sequential Recommender with Large Language Models for Joint Video and Comment Recommendation [77.42486522565295]
We propose a novel recommendation approach called LSVCR to jointly perform personalized video and comment recommendation.
Our approach comprises two key components: a sequential recommendation (SR) model and a supplemental large language model (LLM) recommender.
In particular, we attain a cumulative gain of 4.13% in comment watch time.
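As a loose sketch of the two-component design, assuming the fusion is a simple weighted blend (the paper's actual integration is richer than this):

```python
def fuse_scores(sr_score: float, llm_score: float, alpha: float = 0.7) -> float:
    # The SR model scores candidates cheaply; the supplemental LLM
    # recommender re-scores them; blend the two before final ranking.
    return alpha * sr_score + (1.0 - alpha) * llm_score
```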
arXiv Detail & Related papers (2024-03-20T13:14:29Z)
- Adversarial Batch Inverse Reinforcement Learning: Learn to Reward from Imperfect Demonstration for Interactive Recommendation [23.048841953423846]
We focus on the problem of learning to reward, which is fundamental to reinforcement learning.
Previous approaches introduce additional procedures for learning to reward, thereby increasing the complexity of optimization.
We propose a novel batch inverse reinforcement learning paradigm that achieves the desired properties.
arXiv Detail & Related papers (2023-10-30T13:43:20Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
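The intuition behind the guarantee can be sketched as threshold calibration: on held-out labeled data, pick the most permissive score cutoff whose empirical false discovery rate stays below a target level, then apply it to the ranking model's scores. The paper's actual procedure is finite-sample valid and more careful than this heuristic:

```python
import numpy as np

def calibrate_threshold(scores, labels, alpha=0.1):
    # Scan thresholds from strict to lenient; keep the most lenient one whose
    # empirical FDR (bad items among returned items) is still <= alpha.
    best = np.inf
    for t in np.sort(scores)[::-1]:
        picked = scores >= t
        if picked.sum() == 0:
            continue
        fdr = (picked & (labels == 0)).sum() / picked.sum()
        if fdr <= alpha:
            best = t
    return best  # at serving time: return items with score >= best
```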
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on this approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC).
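A minimal sketch of the SQN variant, assuming a GRU backbone: one head is trained with cross-entropy on the next item (self-supervision), the other with a one-step TD target on the same batch (sizes and the reward definition are assumptions):

```python
import torch
import torch.nn as nn

class SQN(nn.Module):
    def __init__(self, n_items=10000, d=64):
        super().__init__()
        self.embed = nn.Embedding(n_items, d)
        self.gru = nn.GRU(d, d, batch_first=True)
        self.ce_head = nn.Linear(d, n_items)  # self-supervised next-item logits
        self.q_head = nn.Linear(d, n_items)   # Q-values for the RL objective

    def forward(self, seq):                   # seq: (batch, time) item ids
        h, _ = self.gru(self.embed(seq))
        s = h[:, -1]                          # last hidden state = user state
        return self.ce_head(s), self.q_head(s)

# Joint loss (sketch): cross_entropy(logits, next_item)
#   + huber(q[next_item], reward + gamma * q_target.max(dim=-1).values)
```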
arXiv Detail & Related papers (2020-06-10T11:18:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.