Recommendation Unlearning via Matrix Correction
- URL: http://arxiv.org/abs/2307.15960v1
- Date: Sat, 29 Jul 2023 11:36:38 GMT
- Title: Recommendation Unlearning via Matrix Correction
- Authors: Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Jiongran Wu, Peng Zhang,
Li Shang, Ning Gu
- Abstract summary: We propose an Interaction and Mapping Matrices Correction (IMCorrect) method for recommendation unlearning.
We show that IMCorrect is superior in completeness, utility, and efficiency, and is applicable in many recommendation unlearning scenarios.
- Score: 17.457533987238975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are important for providing personalized services to
users, but the vast amount of collected user data has raised concerns about
privacy (e.g., sensitive data), security (e.g., malicious data) and utility
(e.g., toxic data). To address these challenges, recommendation unlearning has
emerged as a promising approach, which allows specific data and models to be
forgotten, mitigating the risks of sensitive/malicious/toxic user data.
However, existing methods often struggle to balance completeness, utility, and
efficiency, i.e., compromising one for the other, leading to suboptimal
recommendation unlearning. In this paper, we propose an Interaction and Mapping
Matrices Correction (IMCorrect) method for recommendation unlearning. Firstly,
we reveal that many collaborative filtering (CF) algorithms can be formulated
as a mapping-based approach, in which the recommendation results can be obtained
by multiplying the user-item interaction matrix with a mapping matrix. Then,
IMCorrect can achieve efficient recommendation unlearning by correcting the
interaction matrix and enhance the completeness and utility by correcting the
mapping matrix, all without costly model retraining. Unlike existing methods,
IMCorrect is a whitebox model that offers greater flexibility in handling
various recommendation unlearning scenarios. Additionally, it has the unique
capability of incrementally learning from new data, which further enhances its
practicality. We conducted comprehensive experiments to validate the
effectiveness of IMCorrect and the results demonstrate that IMCorrect is
superior in completeness, utility, and efficiency, and is applicable in many
recommendation unlearning scenarios.
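To make the mapping-based view concrete: the abstract states that many CF models compute recommendations by multiplying the user-item interaction matrix with a mapping matrix, and that unlearning can then be performed by correcting those two matrices instead of retraining. The sketch below is a minimal illustration of that idea, assuming an EASE-style closed-form item-item mapping matrix as one concrete instance of a mapping-based model; it is not the authors' code, the helper names and hyperparameters are illustrative assumptions, and IMCorrect's actual correction rules (in particular its efficient mapping-matrix correction) are not reproduced here.
```python
# Minimal sketch (not the IMCorrect implementation) of the mapping-based CF view:
# scores = interaction matrix X @ mapping matrix W, with unlearning performed by
# correcting X. The mapping matrix below is an EASE-style closed form, used only
# as one concrete instance of a "mapping-based" CF model.
import numpy as np

def fit_mapping_matrix(X, lam=10.0):
    """Closed-form EASE-style item-item mapping matrix W so that scores = X @ W."""
    G = X.T @ X + lam * np.eye(X.shape[1])   # regularized item-item Gram matrix
    P = np.linalg.inv(G)
    W = -P / np.diag(P)                      # W[i, j] = -P[i, j] / P[j, j]
    np.fill_diagonal(W, 0.0)                 # no self-recommendation
    return W

def correct_interaction_matrix(X, forget_pairs):
    """Interaction-matrix correction: drop the user-item pairs to be forgotten."""
    X = X.copy()
    for u, i in forget_pairs:
        X[u, i] = 0.0
    return X

# Toy implicit-feedback matrix: 4 users x 5 items (1 = observed interaction).
X = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 0, 0, 1, 1]], dtype=float)

W = fit_mapping_matrix(X)
scores_before = X @ W                             # recommendations before unlearning

X_corr = correct_interaction_matrix(X, [(0, 4)])  # user 0 asks to forget item 4
scores_after = X_corr @ W                         # cheap correction via the interaction matrix

# For higher completeness, the mapping matrix should also be corrected; refitting it
# from the corrected data is the naive (costly) route that IMCorrect aims to avoid.
W_corr = fit_mapping_matrix(X_corr)
```
In this view, correcting X alone is cheap but can leave traces of the forgotten data inside the mapping matrix, which is why the paper additionally corrects the mapping matrix to improve completeness and utility.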
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence [55.21518669075263]
CURE4Rec is the first comprehensive benchmark for recommendation unlearning evaluation.
We consider the deeper influence of unlearning on recommendation fairness and robustness towards data with varying impact levels.
arXiv Detail & Related papers (2024-08-26T16:21:50Z)
- Data Imputation using Large Language Model to Accelerate Recommendation System [3.853804391135035]
We propose a novel approach that fine-tunes a Large Language Model (LLM) and uses it to impute missing data for recommendation systems.
The LLM, which is trained on vast amounts of text, is able to understand complex relationships among data and intelligently fill in missing information.
This enriched data is then used by the recommendation system to generate more accurate and personalized suggestions.
arXiv Detail & Related papers (2024-07-14T04:53:36Z)
- CF Recommender System Based on Ontology and Nonnegative Matrix Factorization (NMF) [0.0]
This work addresses the recommender system's data sparsity and accuracy problems.
The implemented approach efficiently reduces the sparsity of CF suggestions, improves their accuracy, and gives more relevant items as recommendations (a generic NMF sketch follows the related-papers list).
arXiv Detail & Related papers (2024-05-31T14:50:53Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on the equivalence of common baseline-correction methods in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an efficient unlearning framework that could efficiently update LLMs without having to retrain the whole model after data removals.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches.
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- Adapting Triplet Importance of Implicit Feedback for Personalized Recommendation [43.85549591503592]
Implicit feedback is frequently used for developing personalized recommendation services.
We propose a novel training framework named Triplet Importance Learning (TIL), which adaptively learns the importance score of training triplets.
We show that our proposed method outperforms the best existing models by 3-21% in terms of Recall@k for the top-k recommendation.
arXiv Detail & Related papers (2022-08-02T19:44:47Z)
- Top-N Recommendation with Counterfactual User Preference Simulation [26.597102553608348]
Top-N recommendation, which aims to learn user ranking-based preference, has long been a fundamental problem in a wide range of applications.
In this paper, we propose to reformulate the recommendation task within the causal inference framework to handle the data scarcity problem.
arXiv Detail & Related papers (2021-09-02T14:28:46Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
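As referenced in the ontology+NMF entry above, the following is a generic sketch of plain NMF-based collaborative filtering using scikit-learn. It illustrates only the standard NMF component; the paper's ontology integration is not reproduced, and the toy matrix, hyperparameters, and variable names are illustrative assumptions.
```python
# Generic NMF-based collaborative filtering sketch (scikit-learn), related to the
# "CF Recommender System Based on Ontology and NMF" entry above. Only the NMF part
# is shown; the ontology component of that paper is not reproduced here.
import numpy as np
from sklearn.decomposition import NMF

# Toy implicit-feedback user-item matrix (1 = interaction, 0 = unknown).
X = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 0, 0, 1, 1]], dtype=float)

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
U = model.fit_transform(X)      # user factors, shape (n_users, k)
V = model.components_           # item factors, shape (k, n_items)
X_hat = U @ V                   # dense score matrix fills in the missing entries

# Recommend the highest-scoring items the user has not interacted with.
user = 0
candidates = np.where(X[user] == 0)[0]
top = candidates[np.argsort(-X_hat[user, candidates])]
print("recommendations for user 0:", top)
```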