Customized Retrieval-Augmented Generation with LLM for Debiasing Recommendation Unlearning
- URL: http://arxiv.org/abs/2511.05494v1
- Date: Wed, 10 Sep 2025 08:49:58 GMT
- Title: Customized Retrieval-Augmented Generation with LLM for Debiasing Recommendation Unlearning
- Authors: Haichao Zhang, Chong Zhang, Peiyu Hu, Shi Qiu, Jia Wang
- Abstract summary: CRAGRU is a novel framework for efficient, user-specific unlearning. It mitigates unlearning bias while preserving recommendation quality. Our work highlights the promise of RAG-based architectures for building robust and privacy-preserving recommender systems.
- Score: 11.187910465178078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern recommender systems face a critical challenge in complying with privacy regulations like the 'right to be forgotten': removing a user's data without disrupting recommendations for others. Traditional unlearning methods address this by partial model updates, but introduce propagation bias--where unlearning one user's data distorts recommendations for behaviorally similar users, degrading system accuracy. While retraining eliminates bias, it is computationally prohibitive for large-scale systems. To address this challenge, we propose CRAGRU, a novel framework leveraging Retrieval-Augmented Generation (RAG) for efficient, user-specific unlearning that mitigates bias while preserving recommendation quality. CRAGRU decouples unlearning into distinct retrieval and generation stages. In retrieval, we employ three tailored strategies designed to precisely isolate the target user's data influence, minimizing collateral impact on unrelated users and enhancing unlearning efficiency. Subsequently, the generation stage utilizes an LLM, augmented with user profiles integrated into prompts, to reconstruct accurate and personalized recommendations without needing to retrain the entire base model. Experiments on three public datasets demonstrate that CRAGRU effectively unlearns targeted user data, significantly mitigating unlearning bias by preventing adverse impacts on non-target users, while maintaining recommendation performance comparable to fully trained original models. Our work highlights the promise of RAG-based architectures for building robust and privacy-preserving recommender systems. The source code is available at: https://github.com/zhanghaichao520/LLM_rec_unlearning.
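The abstract's two-stage design (isolate the target user's influence at retrieval time, then reconstruct recommendations via an LLM prompt built from retrieved profiles) can be illustrated with a minimal sketch. All class and method names here are hypothetical, not CRAGRU's actual API; the "generation" stage is reduced to prompt construction.

```python
from collections import defaultdict

class RAGRecommenderSketch:
    """Toy sketch: a per-user interaction store feeds a prompt-building
    generation stage, so unlearning reduces to an index edit."""

    def __init__(self):
        # user_id -> list of (item_id, rating) interactions used for retrieval
        self.index = defaultdict(list)

    def add_interaction(self, user_id, item_id, rating):
        self.index[user_id].append((item_id, rating))

    def unlearn_user(self, user_id):
        # Retrieval-stage unlearning: drop only the target user's entries,
        # leaving every other user's retrieval context untouched (no retraining,
        # no propagation to behaviorally similar users).
        self.index.pop(user_id, None)

    def build_prompt(self, user_id):
        # Generation stage: fold the retrieved user profile into an LLM prompt.
        profile = self.index.get(user_id, [])
        lines = [f"- item {i} rated {r}" for i, r in profile]
        return "Recommend items for a user with this history:\n" + "\n".join(lines)

rec = RAGRecommenderSketch()
rec.add_interaction("u1", "movie_42", 5)
rec.add_interaction("u2", "movie_7", 4)
rec.unlearn_user("u1")
print(rec.build_prompt("u2"))  # u2's prompt is unaffected by u1's removal
```

The point of the sketch is the locality claim: because the generation stage only ever sees what retrieval returns, deleting one user's rows removes their influence without touching any shared model weights.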
Related papers
- Can Recommender Systems Teach Themselves? A Recursive Self-Improving Framework with Fidelity Control [82.30868101940068]
We propose a paradigm in which a model bootstraps its own performance without reliance on external data or teacher models. Our theoretical analysis shows that RSIR acts as a data-driven implicit regularizer, smoothing the optimization landscape. We show that even smaller models benefit, and weak models can generate effective training curricula for stronger ones.
arXiv Detail & Related papers (2026-02-17T15:31:32Z)
- Towards a Real-World Aligned Benchmark for Unlearning in Recommender Systems [49.766845975588275]
We propose a set of design desiderata and research questions to guide the development of a more realistic benchmark for unlearning in recommender systems. We argue for an unlearning setup that reflects the sequential, time-sensitive nature of real-world deletion requests. We present a preliminary experiment in a next-basket recommendation setting based on our proposed desiderata and find that unlearning also works for sequential recommendation models.
arXiv Detail & Related papers (2025-08-23T16:05:40Z)
- Pre-training for Recommendation Unlearning [14.514770044236375]
UnlearnRec is a model-agnostic pre-training paradigm that prepares systems for efficient unlearning operations. Our method delivers exceptional unlearning effectiveness while providing more than 10x speedup compared to retraining approaches.
arXiv Detail & Related papers (2025-05-28T17:57:11Z)
- A Novel Generative Model with Causality Constraint for Mitigating Biases in Recommender Systems [20.672668625179526]
Latent confounding bias can obscure the true causal relationship between user feedback and item exposure. We propose a novel generative framework called Latent Causality Constraints for Debiasing representation learning in Recommender Systems.
arXiv Detail & Related papers (2025-05-22T14:09:39Z)
- CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence [55.21518669075263]
CURE4Rec is the first comprehensive benchmark for recommendation unlearning evaluation. We consider the deeper influence of unlearning on recommendation fairness and robustness towards data with varying impact levels.
arXiv Detail & Related papers (2024-08-26T16:21:50Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Personalized Negative Reservoir for Incremental Learning in Recommender Systems [20.346543608461204]
In commercial settings, once a recommendation system model has been trained and deployed it needs to be updated frequently as new client data arrive. Naively fine-tuning solely on the new data runs into the well-documented problem of catastrophic forgetting. We propose a personalized negative reservoir strategy, which is used to obtain negative samples for the standard triplet loss of graph-based recommendation systems.
arXiv Detail & Related papers (2024-03-06T19:08:28Z)
- Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems [28.566330708233824]
We find that attackers can extract private information, e.g., gender, race, and age, from a trained model even if it has not been explicitly encountered during training.
To protect the sensitive attribute of users, Attribute Unlearning (AU) aims to degrade attacking performance and make target attributes indistinguishable.
arXiv Detail & Related papers (2023-10-06T09:36:44Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Recommendation Unlearning [27.99369346343332]
RecEraser is a general and efficient machine unlearning framework tailored to the recommendation task.
We first design three novel data partition algorithms to divide training data into balanced groups based on their similarity.
Experimental results on three public benchmarks show that RecEraser can not only achieve efficient unlearning, but also outperform the state-of-the-art unlearning methods in terms of model utility.
arXiv Detail & Related papers (2022-01-18T08:43:34Z)
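The RecEraser entry above hinges on dividing training data into balanced, similarity-based groups. A minimal sketch of that idea is a greedy capacity-constrained assignment: each point goes to its most similar centroid that still has room, so no group can swallow a dense cluster. The function name, the 1-D "similarity" (absolute distance), and the random centroid choice are all illustrative simplifications, not RecEraser's actual algorithms.

```python
import random

def balanced_partition(points, k, seed=0):
    """Greedy sketch: assign each point to the nearest centroid whose group
    still has capacity, so all k groups end up (near-)equal in size."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)   # pick k initial centroids at random
    cap = -(-len(points) // k)          # ceil(n / k) points per group
    groups = [[] for _ in range(k)]
    for p in points:
        # rank centroids by similarity (here: 1-D distance),
        # then take the first group that is not yet full
        order = sorted(range(k), key=lambda c: abs(p - centroids[c]))
        for c in order:
            if len(groups[c]) < cap:
                groups[c].append(p)
                break
    return groups

groups = balanced_partition(list(range(12)), k=3)
print([len(g) for g in groups])  # prints [4, 4, 4]
```

Balance matters for unlearning because each group trains its own sub-model: a deletion request only triggers retraining of the one small group containing the affected data, and equal group sizes keep that worst-case retraining cost bounded.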
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.