Netflix and Forget: Efficient and Exact Machine Unlearning from
Bi-linear Recommendations
- URL: http://arxiv.org/abs/2302.06676v1
- Date: Mon, 13 Feb 2023 20:27:45 GMT
- Title: Netflix and Forget: Efficient and Exact Machine Unlearning from
Bi-linear Recommendations
- Authors: Mimee Xu, Jiankai Sun, Xin Yang, Kevin Yao, Chong Wang
- Abstract summary: This paper focuses on simple but widely deployed bi-linear models for recommendations based on matrix completion.
We develop Unlearn-ALS by making a few key modifications to the fine-tuning procedure under Alternating Least Squares.
We show that Unlearn-ALS is consistent with retraining without any model degradation and exhibits rapid convergence.
- Score: 15.789980605221672
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: People break up, miscarry, and lose loved ones. Their online streaming and
shopping recommendations, however, do not necessarily update, and may serve as
unhappy reminders of their loss. When users want to renege on their past
actions, they expect the recommender platforms to erase selective data at the
model level. Ideally, given any specified user history, the recommender can
unwind or "forget", as if the record was not part of training. To that end,
this paper focuses on simple but widely deployed bi-linear models for
recommendations based on matrix completion. Without incurring the cost of
re-training, and without degrading the model unnecessarily, we develop
Unlearn-ALS by making a few key modifications to the fine-tuning procedure
under Alternating Least Squares optimisation, thus applicable to any bi-linear
models regardless of the training procedure. We show that Unlearn-ALS is
consistent with retraining without any model degradation and exhibits
rapid convergence, making it suitable for a large class of existing
recommenders.
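The abstract's fine-tuning view can be sketched in a few lines: a toy alternating-least-squares matrix completion loop in which a deleted rating is simply dropped from the observed set and the factors are fine-tuned for a few more rounds. This is an illustrative sketch of the shared ALS mechanics under assumed hyperparameters, not the authors' Unlearn-ALS implementation; all names are ours.

```python
import numpy as np

def als_step(R, mask, U, V, lam=0.1):
    """One round of alternating least squares on the observed entries."""
    k = U.shape[1]
    for i in range(R.shape[0]):              # update user factors
        obs = mask[i]
        if obs.any():
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(k), Vo.T @ R[i, obs])
    for j in range(R.shape[1]):              # update item factors
        obs = mask[:, j]
        if obs.any():
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(k), Uo.T @ R[obs, j])
    return U, V

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(6, 5)).astype(float)  # toy rating matrix (1-5)
mask = rng.random((6, 5)) < 0.8                    # which entries are observed
U = rng.normal(size=(6, 2))
V = rng.normal(size=(5, 2))
for _ in range(20):
    U, V = als_step(R, mask, U, V)

# "Forget" user 0's rating of item 0: drop it from the observed set
# and fine-tune with a few more ALS rounds instead of retraining.
mask[0, 0] = False
for _ in range(5):
    U, V = als_step(R, mask, U, V)
```

Unlearn-ALS itself modifies these update equations so that the result provably matches exact retraining; the sketch only shows the ALS skeleton that any bi-linear model shares.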
Related papers
- Personalized Negative Reservoir for Incremental Learning in Recommender Systems [22.227137206517142]
Recommender systems have become an integral part of online platforms.
Every day the volume of training data is expanding and the number of user interactions is constantly increasing.
The exploration of larger and more expressive models has become a necessary pursuit to improve user experience.
arXiv Detail & Related papers (2024-03-06T19:08:28Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
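The gradient-projection idea can be illustrated in a few lines (an illustrative sketch, not the PGU implementation; all names are ours): the unlearning update is projected onto the orthogonal complement of gradient directions that matter for the retained data, so the step leaves that retained knowledge undisturbed to first order.

```python
import numpy as np

def project_out(g, G_retain):
    """Project gradient g onto the orthogonal complement of the span of
    retained-data gradient directions (columns of G_retain)."""
    Q, _ = np.linalg.qr(G_retain)   # orthonormal basis of the retained span
    return g - Q @ (Q.T @ g)        # remove components along that span

rng = np.random.default_rng(1)
G_retain = rng.normal(size=(10, 3))  # 3 gradient directions to preserve
g = rng.normal(size=10)              # raw unlearning gradient
g_proj = project_out(g, G_retain)
# g_proj is orthogonal to every retained direction, so stepping along it
# does not (to first order) change the loss on the retained data.
```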
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Deep Regression Unlearning [6.884272840652062]
We introduce deep regression unlearning methods that generalize well and are robust to privacy attacks.
We conduct regression unlearning experiments for computer vision, natural language processing and forecasting applications.
arXiv Detail & Related papers (2022-10-15T05:00:20Z)
- Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that the models enhanced with our method can achieve performance exceeding, or very close to, state-of-the-art BERT4Rec.
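One simple way to bias training targets toward recent interactions, which is the intuition behind recency-based sampling, is to draw target positions with geometrically decaying weights. This is an illustrative sketch under assumed parameters, not the paper's actual objective.

```python
import numpy as np

def sample_targets(seq_len, n, alpha=0.8, rng=None):
    """Sample n target positions from a sequence of length seq_len,
    biased toward the most recent items: position t gets weight
    alpha**(seq_len - 1 - t), so later (more recent) positions are
    chosen more often."""
    if rng is None:
        rng = np.random.default_rng()
    w = alpha ** (seq_len - 1 - np.arange(seq_len))
    return rng.choice(seq_len, size=n, p=w / w.sum())

rng = np.random.default_rng(3)
picks = sample_targets(seq_len=10, n=10_000, rng=rng)
# With alpha < 1 the sampled positions concentrate near the end of the
# sequence, i.e. near the user's most recent interactions.
```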
arXiv Detail & Related papers (2022-07-06T13:06:31Z)
- Implicit Parameter-free Online Learning with Truncated Linear Models [51.71216912089413]
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
We propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor.
Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties.
arXiv Detail & Related papers (2022-03-19T13:39:49Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose first approaches for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
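For quadratic losses such as ridge regression, an influence-function-style closed-form parameter update removes a sample exactly, without retraining. The sketch below is a generic illustration of that idea on a toy problem, not the paper's code; the data and names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)
lam = 1e-3

# Ridge solution on the full data.
H = X.T @ X + lam * np.eye(3)              # Hessian of the ridge loss
theta = np.linalg.solve(H, X.T @ y)

# Closed-form update to "unlearn" sample 0: add the removed point's
# gradient, scaled by the inverse Hessian of the reduced problem.
# For quadratic losses this recovers the retrained solution exactly.
x0, y0 = X[0], y[0]
H_minus = H - np.outer(x0, x0)             # Hessian without sample 0
grad0 = x0 * (x0 @ theta - y0)             # gradient of sample 0 at theta
theta_unlearned = theta + np.linalg.solve(H_minus, grad0)
```

For general (non-quadratic) models the same update is only a first-order approximation, which is where influence functions come in.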
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Incremental Learning for Personalized Recommender Systems [8.020546404087922]
We present an incremental learning solution to provide both the training efficiency and the model quality.
The solution is deployed in LinkedIn and directly applicable to industrial scale recommender systems.
arXiv Detail & Related papers (2021-08-13T04:21:21Z)
- SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for samples erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation [28.22402119581332]
Session-based recommendation has received growing attention recently due to increasing privacy concerns.
We propose a method called Adaptively Distilled Exemplar Replay (ADER) by periodically replaying previous training samples.
ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle.
arXiv Detail & Related papers (2020-07-23T13:19:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.