Making Recommender Systems Forget: Learning and Unlearning for Erasable
Recommendation
- URL: http://arxiv.org/abs/2203.11491v1
- Date: Tue, 22 Mar 2022 06:56:06 GMT
- Title: Making Recommender Systems Forget: Learning and Unlearning for Erasable
Recommendation
- Authors: Yuyuan Li, Xiaolin Zheng, Chaochao Chen, Junlin Liu
- Abstract summary: LASER can not only achieve efficient unlearning, but also outperform the state-of-the-art unlearning framework in terms of model utility.
Both theoretical analysis and experiments on two real-world datasets demonstrate that LASER can not only achieve efficient unlearning, but also outperform the state-of-the-art unlearning framework in terms of model utility.
- Score: 18.72554870460794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy laws and regulations require data-driven systems, e.g., recommender
systems, to erase the data that concern individuals. As machine learning models
potentially memorize the training data, data erasure should also unlearn the
data lineage in models, which raises increasing interest in the problem of
Machine Unlearning (MU). However, existing MU methods cannot be directly
applied to recommendation: the basic idea of most recommender systems is
collaborative filtering, but existing MU methods ignore the collaborative
information across users and items. In this paper, we propose a general
erasable recommendation framework, namely LASER, which consists of a Group
module and a SeqTrain module. First, the Group module partitions users into
balanced groups based on the similarity of their collaborative embeddings
learned via hypergraph. Then, the SeqTrain module trains the model sequentially
on all groups with curriculum learning. Both theoretical analysis and
experiments on two real-world datasets demonstrate that LASER can not only
achieve efficient unlearning, but also outperform the state-of-the-art
unlearning framework in terms of model utility.
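To make the two-stage design concrete, here is a minimal sketch of the Group/SeqTrain idea in Python. It assumes user collaborative embeddings are already available as a matrix; the greedy capacity-constrained assignment and the variance-based curriculum ordering are illustrative stand-ins for, not reproductions of, the paper's hypergraph and curriculum-learning algorithms.

```python
import numpy as np

def balanced_partition(user_emb: np.ndarray, k: int, seed: int = 0):
    """Greedy capacity-constrained assignment of users to k equal-size
    groups by similarity to sampled centroids (an illustrative stand-in
    for the hypergraph-based Group module)."""
    rng = np.random.default_rng(seed)
    n = user_emb.shape[0]
    centroids = user_emb[rng.choice(n, size=k, replace=False)]
    cap = int(np.ceil(n / k))
    sim = user_emb @ centroids.T                  # (n, k) similarity scores
    groups = [[] for _ in range(k)]
    for u in np.argsort(-sim.max(axis=1)):        # most confident users first
        for g in np.argsort(-sim[u]):             # best group with room left
            if len(groups[g]) < cap:
                groups[g].append(int(u))
                break
    return [np.array(g) for g in groups]

def seq_train(model, groups, user_emb, fit_fn):
    """SeqTrain-style loop: fit the model on one group at a time, ordered
    from the tightest cluster to the loosest as a crude curriculum.
    `fit_fn(model, group)` is a hypothetical incremental training step."""
    difficulty = lambda g: float(np.var(user_emb[g], axis=0).sum())
    for g in sorted(groups, key=difficulty):
        fit_fn(model, g)
    return model
```

The payoff for unlearning is that a deletion request only requires retraining from the checkpoint of the group containing the affected user onward, rather than retraining from scratch.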
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent evaluations show that existing machine unlearning techniques do not hold up in this more challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection [80.85902083005237]
We introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups.
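As a rough illustration of the selection step, the sketch below drops the training examples whose (datamodel-estimated) influence most hurts the minority group; the `influence` scores are assumed to be precomputed, and the name `d3m_select` is hypothetical.

```python
import numpy as np

def d3m_select(influence: np.ndarray, frac: float = 0.02) -> np.ndarray:
    """Keep every training example except the `frac` whose (datamodel-
    estimated) influence most increases the worst-group loss; retraining
    on the kept indices is the debiasing step."""
    k = int(frac * influence.shape[0])
    drop = np.argpartition(-influence, k)[:k]     # k most harmful examples
    return np.setdiff1d(np.arange(influence.shape[0]), drop)
```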
arXiv Detail & Related papers (2024-06-24T17:51:01Z) - Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
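A minimal PyTorch sketch of the channel-selection idea, assuming mean activation magnitude on the forget data as the explanation score (the paper's actual scoring may differ); the layer setup and helper name are illustrative.

```python
import torch
import torch.nn as nn

def influential_channels(conv: nn.Conv2d, forget_batch: torch.Tensor, top_k: int):
    """Score each output channel by its mean absolute activation on the
    forget data and return the indices of the top_k channels."""
    with torch.no_grad():
        acts = conv(forget_batch)                  # (B, C, H, W)
        score = acts.abs().mean(dim=(0, 2, 3))     # one score per channel
    return torch.topk(score, top_k).indices

# Unlearning then re-initialises only those channels and fine-tunes them
# on the remaining data (fine-tuning not shown).
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
idx = influential_channels(conv, torch.randn(8, 3, 32, 32), top_k=4)
with torch.no_grad():
    conv.weight[idx] = nn.init.kaiming_normal_(torch.empty_like(conv.weight[idx]))
```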
arXiv Detail & Related papers (2024-06-18T11:43:20Z) - Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an unlearning framework that can efficiently update LLMs without having to retrain the whole model after data removals.
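The snippet below sketches one generic recipe for such an update: ascend the loss on the forget data while descending on retained data, touching only trainable parameters. This is an assumption-laden stand-in, not the cited paper's method (which learns dedicated unlearning layers); `model.loss` is a hypothetical helper returning a scalar loss for a batch.

```python
import torch

def unlearning_step(model, forget_batch, retain_batch, lr=1e-5, alpha=1.0):
    # Ascend on the forget data, descend on the retained data.
    loss = -alpha * model.loss(forget_batch) + model.loss(retain_batch)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad          # plain SGD step on trainable params
                p.grad = None             # clear for the next call
```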
arXiv Detail & Related papers (2023-10-31T03:35:59Z) - Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an
Incompetent Teacher [6.884272840652062]
We propose a novel machine unlearning method by exploring the utility of competent and incompetent teachers in a student-teacher framework to induce forgetfulness.
The knowledge from the competent and incompetent teachers is selectively transferred to the student to obtain a model that doesn't contain any information about the forget data.
We introduce the Zero Retrain Forgetting (ZRF) metric to evaluate any unlearning method.
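A minimal sketch of the selective transfer, assuming per-sample logits from a competent teacher and a randomly initialised incompetent teacher are given; the `torch.where` masking formulation is illustrative rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def unlearn_loss(student_logits, good_t_logits, bad_t_logits, is_forget, T=2.0):
    """Selective distillation: forget samples imitate the incompetent
    (randomly initialised) teacher, all others imitate the competent one.
    `is_forget` is a boolean mask of shape (batch,)."""
    target = torch.where(is_forget.unsqueeze(1), bad_t_logits, good_t_logits)
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(target / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
```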
arXiv Detail & Related papers (2022-05-17T05:13:17Z) - Recommendation Unlearning [27.99369346343332]
RecEraser is a general and efficient machine unlearning framework tailored to the recommendation task.
We first design three novel data partition algorithms to divide training data into balanced groups based on their similarity.
Experimental results on three public benchmarks show that RecEraser can not only achieve efficient unlearning, but also outperform the state-of-the-art unlearning methods in terms of model utility.
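The efficiency claim rests on a SISA-style shard-and-retrain layout, sketched below; `train_fn` and `predict_fn` are hypothetical callables, and the mean aggregation stands in for RecEraser's learned attention-based aggregation.

```python
import numpy as np

class ShardedRecommender:
    """SISA-style layout: one submodel per balanced shard of interactions;
    a deletion retrains only the shard that held the interaction."""
    def __init__(self, shards, train_fn, predict_fn):
        self.shards = list(shards)                 # interaction arrays
        self.train_fn, self.predict_fn = train_fn, predict_fn
        self.models = [train_fn(s) for s in self.shards]

    def forget(self, shard_id, row):
        """Erase one interaction, then retrain just the affected submodel."""
        self.shards[shard_id] = np.delete(self.shards[shard_id], row, axis=0)
        self.models[shard_id] = self.train_fn(self.shards[shard_id])

    def predict(self, user, item):
        # Plain mean in place of RecEraser's attention-based aggregation.
        return float(np.mean([self.predict_fn(m, user, item)
                              for m in self.models]))
```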
arXiv Detail & Related papers (2022-01-18T08:43:34Z) - Zero-Shot Machine Unlearning [6.884272840652062]
Modern privacy regulations grant citizens the right to be forgotten by products, services and companies.
In this zero-shot setting, no data related to the training process or training samples may be accessible for the unlearning purpose.
We propose two novel solutions for zero-shot machine unlearning based on (a) error minimizing-maximizing noise and (b) gated knowledge transfer.
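As a sketch of idea (a), the code below synthesises error-maximising noise for a class to be forgotten without touching any training data; `model` is any classifier, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def error_maximizing_noise(model, forget_class, shape, steps=50, lr=0.1):
    """Learn a noise batch that maximises the model's loss on the class to
    forget; fine-tuning the model on this noise (plus a repair pass on the
    other classes) erases the class without any original data."""
    noise = torch.randn(shape, requires_grad=True)
    labels = torch.full((shape[0],), forget_class, dtype=torch.long)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(noise), labels)  # maximise the error
        opt.zero_grad(); loss.backward(); opt.step()
    return noise.detach()
```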
arXiv Detail & Related papers (2022-01-14T19:16:09Z) - SelfCF: A Simple Framework for Self-supervised Collaborative Filtering [72.68215241599509]
Collaborative filtering (CF) is widely used to learn informative latent representations of users and items from observed interactions.
We propose a self-supervised collaborative filtering framework (SelfCF) that is specially designed for recommender scenarios with implicit feedback.
We show that SelfCF can boost accuracy by up to 17.79% on average, compared with the self-supervised framework BUIR.
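A loose sketch of the core trick: perturb the backbone's output embeddings to get a second view and pull each view towards a stop-gradient target, with no negative samples. This is BUIR/BYOL-flavoured shorthand under those assumptions, not SelfCF's exact architecture.

```python
import torch
import torch.nn.functional as F

def selfcf_loss(user_emb, item_emb, p=0.1):
    """user_emb[i] and item_emb[i] are the embeddings of an observed
    interaction. A second view is made by perturbing the *output*
    embeddings (dropout here); each view is aligned with the other
    side's stop-gradient target -- no negative samples needed."""
    u2 = F.dropout(user_emb, p=p)
    i2 = F.dropout(item_emb, p=p)
    loss_u = 1 - F.cosine_similarity(u2, item_emb.detach(), dim=1).mean()
    loss_i = 1 - F.cosine_similarity(i2, user_emb.detach(), dim=1).mean()
    return 0.5 * (loss_u + loss_i)
```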
arXiv Detail & Related papers (2021-07-07T05:21:12Z) - Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that a student model trained from scratch on its teachers' outputs reaches F1-scores comparable to the teacher's.
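A compact sketch of that data-free transfer, assuming plain Gaussian noise as the synthetic input (the paper may use a richer generator):

```python
import torch
import torch.nn.functional as F

def distill(student, teacher, in_shape, steps=1000, batch=64, lr=1e-3):
    """The student never sees real (private) data: it only matches the
    teacher's soft labels on synthetic inputs."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(batch, *in_shape)            # synthetic input data
        with torch.no_grad():
            soft = teacher(x).softmax(dim=1)         # teacher's soft labels
        loss = F.cross_entropy(student(x), soft)     # match the teacher
        opt.zero_grad(); loss.backward(); opt.step()
    return student
```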
arXiv Detail & Related papers (2021-02-01T14:38:54Z) - MUSCLE: Strengthening Semi-Supervised Learning Via Concurrent
Unsupervised Learning Using Mutual Information Maximization [29.368950377171995]
We introduce Mutual-information-based Unsupervised & Semi-supervised Concurrent LEarning (MUSCLE) to combine both unsupervised and semi-supervised learning.
MUSCLE can be used as a stand-alone training scheme for neural networks, and can also be incorporated into other learning approaches.
We show that the proposed hybrid model outperforms the state of the art on several standard benchmarks, including CIFAR-10, CIFAR-100, and Mini-ImageNet.
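In spirit, the concurrent objective combines a supervised term with a mutual-information lower bound on unlabelled views; the sketch below uses InfoNCE as that bound, which is a standard choice but an assumption here rather than the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def muscle_loss(logits_lab, y, z1, z2, temp=0.5, lam=1.0):
    """Supervised cross-entropy on labelled data plus InfoNCE (a standard
    mutual-information lower bound) between two views z1, z2 of the same
    unlabelled batch."""
    sup = F.cross_entropy(logits_lab, y)
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temp                      # (N, N) view similarities
    mi = F.cross_entropy(sim, torch.arange(z1.size(0)))
    return sup + lam * mi
```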
arXiv Detail & Related papers (2020-11-30T23:01:04Z) - Overcoming Data Sparsity in Group Recommendation [52.00998276970403]
Group recommender systems should be able to accurately learn not only users' personal preferences but also the preference aggregation strategy.
In this paper, we take the Bipartite Graph Embedding Model (BGEM), the self-attention mechanism and Graph Convolutional Networks (GCNs) as basic building blocks to learn group and user representations in a unified way.
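A tiny sketch of attention-based preference aggregation, the piece the self-attention mechanism contributes; the learned query vector and function name are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def group_embedding(member_emb: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """Attention over the (M, d) member embeddings with a learned query
    vector (d,): influential members get larger weights in the group
    representation."""
    weights = F.softmax(member_emb @ query, dim=0)  # (M,) member weights
    return weights @ member_emb                     # (d,) group embedding
```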
arXiv Detail & Related papers (2020-10-02T07:11:19Z)