Netflix and Forget: Efficient and Exact Machine Unlearning from
Bi-linear Recommendations
- URL: http://arxiv.org/abs/2302.06676v1
- Date: Mon, 13 Feb 2023 20:27:45 GMT
- Title: Netflix and Forget: Efficient and Exact Machine Unlearning from
Bi-linear Recommendations
- Authors: Mimee Xu, Jiankai Sun, Xin Yang, Kevin Yao, Chong Wang
- Abstract summary: This paper focuses on simple but widely deployed bi-linear models for recommendations based on matrix completion.
We develop Unlearn-ALS by making a few key modifications to the fine-tuning procedure under Alternating Least Squares.
We show that Unlearn-ALS is consistent with retraining without any model degradation and exhibits rapid convergence.
- Score: 15.789980605221672
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: People break up, miscarry, and lose loved ones. Their online streaming and
shopping recommendations, however, do not necessarily update, and may serve as
unhappy reminders of their loss. When users want to renege on their past
actions, they expect the recommender platforms to erase selective data at the
model level. Ideally, given any specified user history, the recommender can
unwind or "forget", as if the record was not part of training. To that end,
this paper focuses on simple but widely deployed bi-linear models for
recommendations based on matrix completion. Without incurring the cost of
re-training, and without degrading the model unnecessarily, we develop
Unlearn-ALS by making a few key modifications to the fine-tuning procedure
under Alternating Least Squares optimisation, and thus applicable to any bi-linear model regardless of the training procedure. We show that Unlearn-ALS is
consistent with retraining without any model degradation and exhibits
rapid convergence, making it suitable for a large class of existing
recommenders.
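As a rough illustration of the setting, the sketch below implements plain ALS matrix completion in NumPy together with a hypothetical forget step that drops the specified (user, item) observations and fine-tunes the existing factors with a few additional ALS sweeps. It conveys only the general idea of warm-started ALS fine-tuning after deletion; the specific modifications that make Unlearn-ALS exact are described in the paper, and the function names and hyperparameters here are illustrative.

```python
# Minimal sketch, not the paper's exact Unlearn-ALS procedure: ALS matrix
# completion with a forget step that removes ratings and fine-tunes factors.
import numpy as np

def als_sweep(R, mask, U, V, lam=0.1):
    """One alternating least-squares sweep: closed-form user then item updates."""
    k = U.shape[1]
    for u in range(U.shape[0]):
        obs = np.where(mask[u])[0]
        if obs.size == 0:
            continue
        Vo = V[obs]                                   # factors of observed items
        A = Vo.T @ Vo + lam * np.eye(k)
        U[u] = np.linalg.solve(A, Vo.T @ R[u, obs])   # closed-form user update
    for i in range(V.shape[0]):
        obs = np.where(mask[:, i])[0]
        if obs.size == 0:
            continue
        Uo = U[obs]                                   # factors of observing users
        A = Uo.T @ Uo + lam * np.eye(k)
        V[i] = np.linalg.solve(A, Uo.T @ R[obs, i])   # closed-form item update
    return U, V

def forget_and_finetune(R, mask, U, V, forget_pairs, sweeps=3):
    """Drop the given (user, item) ratings and fine-tune the current factors."""
    mask = mask.copy()
    for u, i in forget_pairs:
        mask[u, i] = False            # treat the rating as if it was never observed
    for _ in range(sweeps):           # warm-started ALS sweeps instead of retraining
        U, V = als_sweep(R, mask, U, V)
    return U, V, mask
```

Because each ALS update already has a closed form, restarting the sweeps from the current factors is typically far cheaper than retraining from a fresh initialisation.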
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in such challenging settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models [2.0962367975513496]
Machine unlearning aims to efficiently eliminate the influence of specific training data, known as the forget set, from the model.
Existing unlearning methods rely solely on negative feedback to suppress responses related to the forget set.
We propose a novel approach called Alternate Preference Optimization (AltPO), which combines negative feedback with in-domain positive feedback on the forget set.
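For intuition, one way to combine the two kinds of feedback is a preference-style objective that prefers a plausible in-domain alternate answer over the original forget-set answer. The sketch below assumes a DPO-like loss over sequence log-likelihoods purely for illustration; the exact AltPO objective in the paper may differ, and the function name and beta value are assumptions.

```python
# Hypothetical preference-style unlearning loss: prefer the alternate answer
# (positive feedback) over the original forget-set answer (negative feedback).
import torch
import torch.nn.functional as F

def alt_preference_loss(logp_alt, logp_alt_ref, logp_orig, logp_orig_ref, beta=0.1):
    """All inputs are per-example sequence log-likelihoods (current vs. frozen reference model)."""
    # reward margins relative to the reference model
    margin = (logp_alt - logp_alt_ref) - (logp_orig - logp_orig_ref)
    # raising the margin pushes probability toward the alternate answer and away from the original
    return -F.logsigmoid(beta * margin).mean()
```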
arXiv Detail & Related papers (2024-09-20T13:05:07Z)
- Personalized Negative Reservoir for Incremental Learning in Recommender Systems [22.227137206517142]
Recommender systems have become an integral part of online platforms.
Every day the volume of training data is expanding and the number of user interactions is constantly increasing.
The exploration of larger and more expressive models has become a necessary pursuit to improve user experience.
arXiv Detail & Related papers (2024-03-06T19:08:28Z)
- Clarify: Improving Model Robustness With Natural Language Corrections [59.041682704894555]
The standard way to teach models is by feeding them lots of data.
This approach often teaches models incorrect ideas because they pick up on misleading signals in the data.
We propose Clarify, a novel interface and method for interactively correcting model misconceptions.
arXiv Detail & Related papers (2024-02-06T05:11:38Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
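As a simplified illustration of the gradient-projection idea, the sketch below removes from the forget-set gradient any component lying in the span of retained-data gradients before applying an update, so the step disturbs behaviour on the retained data as little as possible. This is a generic orthogonal-projection construction, not necessarily the exact PGU procedure; the names, the ascent direction, and the step size are illustrative assumptions.

```python
# Hypothetical gradient-projection sketch for unlearning, not the paper's exact method.
import numpy as np

def project_out(g, retain_grads):
    """Remove from g any component in the span of the retained-data gradients."""
    G = np.stack(retain_grads)            # shape (m, d): directions to preserve
    Q, _ = np.linalg.qr(G.T)              # orthonormal basis of that span, shape (d, m)
    return g - Q @ (Q.T @ g)

def unlearning_step(params, forget_grad, retain_grads, lr=0.1):
    """One ascent step on the forget loss, restricted to 'harmless' directions."""
    g = project_out(forget_grad, retain_grads)
    return params + lr * g                # raise forget loss without moving along retained directions
```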
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method can achieve performance exceeding or very close to that of the state-of-the-art BERT4Rec.
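As a minimal illustration of recency-based target sampling, the sketch below draws the training target from a user's interaction sequence with probabilities that decay for older items. The exponential decay form and its parameter are assumptions made here for illustration; the paper defines its own recency importance function.

```python
# Hypothetical sketch: sample training targets with a bias toward recent interactions.
import numpy as np

def sample_recency_target(sequence, alpha=0.8, rng=None):
    """Pick a training target from a user sequence, favouring recent interactions."""
    rng = rng or np.random.default_rng()
    n = len(sequence)
    positions = np.arange(1, n)                  # keep at least one item as input
    weights = alpha ** (n - 1 - positions)       # assumed exponential recency decay
    probs = weights / weights.sum()
    t = rng.choice(positions, p=probs)           # the most recent item is most likely
    return sequence[:t], sequence[t]

# Example: older items are still sampled occasionally, so the model is also trained
# on intermediate next-item prediction tasks, not only on predicting the final item.
prefix, target = sample_recency_target([10, 42, 7, 99, 5])
```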
arXiv Detail & Related papers (2022-07-06T13:06:31Z)
- Implicit Parameter-free Online Learning with Truncated Linear Models [51.71216912089413]
Parameter-free algorithms are online learning algorithms that do not require setting learning rates.
We propose new parameter-free algorithms that can take advantage of truncated linear models through a new update that has an "implicit" flavor.
Based on a novel decomposition of the regret, the new update is efficient, requires only one gradient at each step, never overshoots the minimum of the truncated model, and retains the favorable parameter-free properties.
arXiv Detail & Related papers (2022-03-19T13:39:49Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
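For intuition, the sketch below shows a classic influence-function style closed-form update for removing a single example from an L2-regularised least-squares model: one Newton step that cancels the removed example's gradient contribution at the current solution. The paper generalises this kind of closed-form update to unlearning individual features and labels; the ridge-regression setting here is only an illustrative special case.

```python
# Hypothetical influence-style removal update for ridge regression; assumes theta
# currently minimises the full regularised objective over (X, y).
import numpy as np

def removal_update(theta, X, y, x_rm, y_rm, lam=1.0):
    """Closed-form removal of one example (x_rm, y_rm) from ridge regression."""
    d = X.shape[1]
    # Hessian of the objective after removal: X^T X + lam*I minus the removed outer product
    H = X.T @ X + lam * np.eye(d) - np.outer(x_rm, x_rm)
    # gradient contribution of the removed example at the current parameters
    g = (x_rm @ theta - y_rm) * x_rm
    # one Newton step from the current solution cancels that contribution
    return theta + np.linalg.solve(H, g)
```

For this quadratic loss the step lands exactly on the retrained solution; for general convex losses the same construction gives a second-order approximation.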
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Incremental Learning for Personalized Recommender Systems [8.020546404087922]
We present an incremental learning solution that provides both training efficiency and model quality.
The solution is deployed in LinkedIn and directly applicable to industrial scale recommender systems.
arXiv Detail & Related papers (2021-08-13T04:21:21Z)
- ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation [28.22402119581332]
Session-based recommendation has received growing attention recently due to increasing privacy concerns.
We propose a method called Adaptively Distilled Exemplar Replay (ADER) by periodically replaying previous training samples.
ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle.
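As a rough sketch of exemplar replay with distillation, the snippet below combines a standard loss on new interactions with a knowledge-distillation term that keeps the updated model close to the previous model's predictions on replayed exemplars. The buffer policy, temperature, and weighting here are illustrative assumptions; ADER's adaptive distillation schedule and exemplar selection are defined in the paper.

```python
# Hypothetical replay-plus-distillation loss for incremental recommender updates.
import torch
import torch.nn.functional as F

def replay_distill_loss(model, prev_model, new_batch, exemplar_batch, T=2.0, lam=1.0):
    """New-data loss plus distillation against the previous model on replayed exemplars."""
    x_new, y_new = new_batch
    loss_new = F.cross_entropy(model(x_new), y_new)

    x_old, _ = exemplar_batch
    with torch.no_grad():
        teacher = F.softmax(prev_model(x_old) / T, dim=-1)   # frozen previous model
    student = F.log_softmax(model(x_old) / T, dim=-1)
    loss_distill = F.kl_div(student, teacher, reduction="batchmean") * (T * T)

    return loss_new + lam * loss_distill
```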
arXiv Detail & Related papers (2020-07-23T13:19:53Z)