Learn to Forget: Machine Unlearning via Neuron Masking
- URL: http://arxiv.org/abs/2003.10933v3
- Date: Mon, 2 Aug 2021 09:06:41 GMT
- Title: Learn to Forget: Machine Unlearning via Neuron Masking
- Authors: Yang Liu, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma,
Philip Yu, Kui Ren
- Abstract summary: We propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.
We also propose a novel unlearning method called Forsaken, which is superior to previous work in either utility or efficiency.
- Score: 36.072775581268544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, machine learning models, especially neural networks, have become
prevalent in many real-world applications. These models are trained on a
one-way trip from user data: as long as users contribute their data, there is
no way to withdraw; and it is well known that a neural network memorizes its
training data. This contradicts the "right to be forgotten" clause of the GDPR,
potentially leading to law violations. To this end, machine unlearning has become
a popular research topic, as it allows users to eliminate memorization of their
private data from a trained machine learning model. In this paper, we propose
the first uniform metric, called forgetting rate, to measure the effectiveness
of a machine unlearning method. It is based on the concept of membership
inference and describes the transformation rate of the eliminated data from
"memorized" to "unknown" after conducting unlearning. We also propose a novel
unlearning method called Forsaken. It is superior to previous work in either
utility or efficiency (when achieving the same forgetting rate). We benchmark
Forsaken on eight standard datasets to evaluate its performance. The
experimental results show that it achieves a forgetting rate of more than 90% on
average while causing less than 5% accuracy loss.
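A minimal sketch of how such a forgetting rate could be computed, assuming a membership-inference attack that labels each erased sample as "member" (memorized) or "non-member" (unknown) before and after unlearning; the function name, interface, and normalization are illustrative assumptions, not the paper's code:

```python
import numpy as np

def forgetting_rate(mia_before: np.ndarray, mia_after: np.ndarray) -> float:
    """Fraction of erased samples that flip from 'memorized' (member) to
    'unknown' (non-member) under a membership-inference attack.

    mia_before / mia_after: boolean arrays over the erased (forget) set,
    True meaning the attack judges the sample a training-set member.
    """
    memorized_before = mia_before.sum()
    if memorized_before == 0:
        return 0.0  # nothing was memorized to begin with
    # samples the attack recognized before unlearning but not after
    flipped = np.logical_and(mia_before, ~mia_after).sum()
    return float(flipped) / float(memorized_before)
```

On this reading, a forgetting rate above 90% would mean that more than nine in ten samples the attack once flagged as memorized are no longer recognized after unlearning.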
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening [51.34904967046097]
We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data.
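A minimal sketch of the dampening idea, assuming diagonal Fisher-style importances have already been computed per parameter on the forget set and on the full training data; the thresholds alpha and lam, the dict layout, and the exact scaling rule are illustrative assumptions, not the authors' procedure:

```python
import torch

def selective_dampening(model, fisher_forget, fisher_full,
                        alpha: float = 10.0, lam: float = 1.0):
    """Shrink parameters that matter far more to the forget set than to
    the training data as a whole (a sketch of selective dampening)."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            imp_f = fisher_forget[name]  # diagonal importance on forget set
            imp_d = fisher_full[name]    # diagonal importance on full data
            # flag parameters disproportionately tied to the forget set
            mask = imp_f > alpha * imp_d
            # dampen them in proportion to how specialized they are
            scale = torch.clamp(lam * imp_d / (imp_f + 1e-12), max=1.0)
            p[mask] *= scale[mask]
```

Being post hoc, a step like this touches only the trained weights, which is what lets the method avoid both retraining and long-term storage of the training data.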
arXiv Detail & Related papers (2023-08-15T11:30:45Z)
- Random Relabeling for Efficient Machine Unlearning [8.871042314510788]
Individuals' right to retract personal data and relevant data privacy regulations pose great challenges to machine learning.
We propose a random-relabeling unlearning scheme to efficiently handle sequential data-removal requests.
We also propose a less constraining removal-certification method based on the similarity of probability distributions to those of naive unlearning.
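A minimal sketch of the random-relabeling idea: when a removal request arrives, the affected samples are given fresh random labels and the model is briefly fine-tuned on them, destroying the learned association. The function below is an illustrative assumption, not the paper's algorithm:

```python
import torch
import torch.nn.functional as F

def random_relabel_step(model, optimizer, forget_x, num_classes: int):
    """One fine-tuning step on forget samples with randomized labels."""
    # draw a fresh random label for every sample slated for removal
    rand_y = torch.randint(0, num_classes, (forget_x.size(0),),
                           device=forget_x.device)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(forget_x), rand_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating such steps for each incoming request is what makes the scheme naturally sequential.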
arXiv Detail & Related papers (2023-05-21T02:37:26Z)
- Towards Unbounded Machine Unlearning [13.31957848633701]
We study unlearning for different applications (removing biases, RB; resolving confusion, RC; user privacy, UP), with the view that each has its own desiderata, definitions of 'forgetting', and associated metrics for forget quality.
For UP, we propose a novel adaptation of a strong Membership Inference Attack for unlearning.
We also propose SCRUB, a novel unlearning algorithm, which is consistently a top performer for forget quality across the different application-dependent metrics for RB, RC, and UP.
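A compressed sketch of the SCRUB-style student-teacher objective: the unlearned model (student) is trained to match the original model (teacher) on retained data while diverging from it on the forget data. SCRUB itself alternates separate min- and max-steps; folding both into one signed loss, with gamma as an assumed weighting, is a simplification:

```python
import torch.nn.functional as F

def scrub_style_loss(student, teacher, retain_x, retain_y, forget_x,
                     gamma: float = 1.0):
    """Stay close to the teacher on retained data, move away on forget data."""
    def kl_to_teacher(x):
        return F.kl_div(F.log_softmax(student(x), dim=1),
                        F.softmax(teacher(x), dim=1).detach(),
                        reduction="batchmean")
    # match the teacher (and the true labels) on data we keep
    retain_loss = kl_to_teacher(retain_x) + F.cross_entropy(student(retain_x), retain_y)
    # diverge from the teacher on data we must forget
    forget_loss = -kl_to_teacher(forget_x)
    return retain_loss + gamma * forget_loss
```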
arXiv Detail & Related papers (2023-02-20T10:15:36Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
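A generic sketch of such an exemplar memory, here a fixed-budget reservoir-sampled buffer whose contents are replayed alongside new classes; this illustrates the memory-bank idea only, not the paper's transformer-based model:

```python
import random

class ExemplarMemory:
    """Fixed-budget store of past (x, y) pairs for rehearsal."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, x, y):
        # reservoir sampling keeps a uniform sample of everything seen
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (x, y)

    def sample(self, k: int):
        # mix these exemplars into each new task's training batches
        return random.sample(self.buffer, min(k, len(self.buffer)))
```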
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Machine Unlearning Method Based On Projection Residual [23.24026891609028]
This paper adopts a projection-residual method based on Newton's method.
The main purpose is to implement machine unlearning in the context of linear regression models and neural network models.
Experiments show that this method deletes data more thoroughly, with results close to those of model retraining.
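For linear models, a single Newton step can remove a sample exactly; a minimal sketch of that flavor for ridge regression, using a Sherman-Morrison downdate of the inverse Hessian (this illustrates closed-form Newton-style removal generally, not the paper's projection-residual algorithm):

```python
import numpy as np

def remove_point_ridge(H_inv, w, x, y):
    """Remove one sample (x, y) from a fitted ridge regression.

    H_inv: inverse Hessian (X^T X + lam*I)^{-1} of the full fit;
    w: current weights. Returns the downdated inverse and new weights.
    """
    # Sherman-Morrison downdate: (H - x x^T)^{-1}
    Hx = H_inv @ x
    H_inv_new = H_inv + np.outer(Hx, Hx) / (1.0 - x @ Hx)
    # one Newton step from w on the objective without (x, y)
    grad_removed = x * (x @ w - y)  # this sample's gradient contribution
    w_new = w + H_inv_new @ grad_removed
    return H_inv_new, w_new
```

Because the regularized Hessian stays positive definite after the downdate, this step lands exactly on the solution one would get by retraining without the sample, which is the sense in which such methods are "close to model retraining".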
arXiv Detail & Related papers (2022-09-30T07:29:55Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of machine unlearning algorithms based on epistemic uncertainty.
To the best of our knowledge, this is the first general evaluation definition for machine unlearning.
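One plausible instantiation of an uncertainty-based evaluation: use the trace of the diagonal empirical Fisher information on the forget set as a proxy for how much information the model still carries about it, with lower values after unlearning indicating higher epistemic uncertainty. The paper's exact score may differ; this sketch is an assumption:

```python
import torch
import torch.nn.functional as F

def fisher_trace(model, loader):
    """Mean trace of the diagonal empirical Fisher information over a
    dataset -- a proxy for retained information about that data."""
    total, batches = 0.0, 0
    for x, y in loader:
        model.zero_grad()
        loss = F.nll_loss(F.log_softmax(model(x), dim=1), y)
        loss.backward()
        total += sum((p.grad ** 2).sum().item()
                     for p in model.parameters() if p.grad is not None)
        batches += 1
    return total / max(batches, 1)
```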
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher [6.884272840652062]
We propose a novel machine unlearning method by exploring the utility of competent and incompetent teachers in a student-teacher framework to induce forgetfulness.
The knowledge from the competent and incompetent teachers is selectively transferred to the student to obtain a model that doesn't contain any information about the forget data.
We introduce the Zero Retrain Forgetting (ZRF) metric to evaluate any unlearning method.
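A sketch of how a ZRF-style score can be computed: compare the unlearned model's predictive distribution on the forget data with that of a randomly initialized ("incompetent") model via Jensen-Shannon divergence, so that a score near 1 means the model behaves as if it had never seen those points. The exact normalization is an assumption:

```python
import torch
import torch.nn.functional as F

def zrf_score(unlearned, random_model, forget_x):
    """1 minus the mean Jensen-Shannon divergence between the unlearned
    model and a randomly initialized model on the forget data."""
    with torch.no_grad():
        p = F.softmax(unlearned(forget_x), dim=1)
        q = F.softmax(random_model(forget_x), dim=1)
        m = 0.5 * (p + q)
        # per-sample JS divergence (bounded by log 2 in nats)
        js = 0.5 * (p * (p / m).log()).sum(dim=1) \
           + 0.5 * (q * (q / m).log()).sum(dim=1)
        return float(1.0 - js.mean())
```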
arXiv Detail & Related papers (2022-05-17T05:13:17Z)
- Lightweight machine unlearning in neural network [2.406359246841227]
"Right to be forgotten" was introduced in a timely manner, stipulating that individuals have the right to withdraw their consent based on their consent.
To solve this problem, machine unlearning is proposed, which allows the model to erase all memory of private information.
Our method is 15 times faster than retraining.
arXiv Detail & Related papers (2021-11-10T04:48:31Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
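A compressed sketch of the closed-form, influence-function-style update: shift the parameters by the inverse-Hessian-weighted gradient of the removed data, approximating a model trained without it. The flattened-tensor layout and helper name are assumptions, not the paper's implementation:

```python
import torch

def influence_unlearn(theta, hess_inv, grad_forget):
    """One Newton-like closed-form update that approximately removes the
    influence of the forgotten data.

    theta: flattened parameters; hess_inv: inverse Hessian of the loss on
    the remaining data; grad_forget: summed gradient of the removed data
    evaluated at theta.
    """
    with torch.no_grad():
        return theta + hess_inv @ grad_forget
```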
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.