Lightweight machine unlearning in neural network
- URL: http://arxiv.org/abs/2111.05528v1
- Date: Wed, 10 Nov 2021 04:48:31 GMT
- Title: Lightweight machine unlearning in neural network
- Authors: Kongyang Chen, Yiwen Wang, Yao Huang
- Abstract summary: "Right to be forgotten" was introduced in a timely manner, stipulating that individuals have the right to withdraw their consent based on their consent.
To solve this problem, machine unlearning is proposed, which allows the model to erase all memory of private information.
Our method is 15 times faster than retraining.
- Score: 2.406359246841227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, machine learning neural networks have penetrated deeply
into people's lives. The price of this convenience is that people's private
information is at risk of disclosure. The "right to be forgotten" was
introduced in a timely manner, stipulating that individuals have the right to
withdraw consent they previously gave to the processing of their personal
information. To solve this problem, machine unlearning has been proposed, which
allows a model to erase all memory of specified private information. Previous
approaches, including retraining and incremental learning to update models,
often require extra storage space or are difficult to apply to neural networks.
Our method only needs to apply a small perturbation to the weights of the
target model and iterate them in the direction of a model trained on the
remaining data subset, until the contribution of the unlearning data to the
model is completely eliminated. In this paper, experiments on five datasets
demonstrate the effectiveness of our method for machine unlearning, and our
method is 15 times faster than retraining.
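The abstract does not spell out the update rule, so the following is only a minimal sketch of the stated idea under assumptions: the perturbation is Gaussian noise on the weights, and "iterating toward the model trained on the remaining data" is realized as fine-tuning on the retained subset. The `retain_loader` interface and all hyperparameters are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def perturb_and_unlearn(model, retain_loader, noise_scale=1e-3, lr=1e-3, epochs=5):
    """Sketch: slightly perturb the weights, then iterate them toward the
    model that training on only the retained data would produce, here by
    taking gradient steps on the retained subset."""
    model = copy.deepcopy(model)
    with torch.no_grad():
        for p in model.parameters():          # small random weight perturbation
            p.add_(noise_scale * torch.randn_like(p))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in retain_loader:            # only the remaining data is used
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model
```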
Related papers
- Machine Unlearning using Forgetting Neural Networks [0.0]
This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs).
FNNs are neural networks with specific forgetting layers that take inspiration from the processes involved when a human brain forgets.
We report our results on the MNIST handwritten digit recognition and fashion datasets.
arXiv Detail & Related papers (2024-10-29T02:52:26Z)
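The summary does not describe the forgetting layers themselves; the toy layer below only illustrates the general idea of activations that decay with "time", loosely modeled on the human forgetting curve, and is not the paper's actual design.

```python
import math
import torch.nn as nn

class ForgettingLayer(nn.Module):
    """Toy forgetting layer: scales activations by exp(-t/s), a crude
    stand-in for the human forgetting curve. The paper's FNN layers may differ."""
    def __init__(self, stability=10.0):
        super().__init__()
        self.stability = stability
        self.t = 0  # elapsed "forgetting time", advanced externally

    def step(self):
        self.t += 1  # called once per unlearning round

    def forward(self, x):
        return x * math.exp(-self.t / self.stability)
```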
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
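The summary names gradient projection but not its construction. One common realization, sketched here, estimates the subspace spanned by retained-data gradients with an SVD and confines unlearning updates to its orthogonal complement; the flattened-parameter interface, the subspace rank `k`, and the ascent step on the forget loss are assumptions, not the paper's exact algorithm.

```python
import torch

def orthogonal_projector(grad_matrix, k=10):
    """grad_matrix: (num_samples, num_params) stacked retained-data gradients.
    Returns a function projecting a vector onto the complement of the top-k
    subspace those gradients span (the "do not disturb" directions)."""
    U, _, _ = torch.linalg.svd(grad_matrix.T, full_matrices=False)
    B = U[:, :k]                       # basis of the retained-knowledge subspace
    return lambda g: g - B @ (B.T @ g)

def pgu_step(params_vec, forget_grad, project, lr=1e-2):
    """One unlearning step: move along the forget-loss gradient, but only in
    directions orthogonal to the retained-data gradient subspace."""
    return params_vec + lr * project(forget_grad)
```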
- Machine Unlearning Methodology base on Stochastic Teacher Network [33.763901254862766]
"Right to be forgotten" grants data owners the right to actively withdraw data that has been used for model training.
Existing machine unlearning methods have been found to be ineffective in quickly removing knowledge from deep learning models.
This paper proposes using a stochastic network as a teacher to expedite the mitigation of the influence caused by forgotten data on the model.
arXiv Detail & Related papers (2023-08-28T06:05:23Z)
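A plausible reading of a stochastic teacher is distillation toward a randomly initialized network on the forget set, so the model's outputs there revert to uninformative ones. The `teacher_factory` hook, the KL objective, and the loop below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def stochastic_teacher_unlearn(model, teacher_factory, forget_loader, lr=1e-3, epochs=1):
    """Sketch: push the model's predictions on the forget set toward those of
    a randomly initialized ("stochastic") teacher, which has learned nothing.
    teacher_factory() is assumed to return an untrained copy of the architecture."""
    teacher = teacher_factory()
    teacher.eval()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in forget_loader:
            with torch.no_grad():
                t_probs = F.softmax(teacher(x), dim=1)   # uninformative targets
            s_log_probs = F.log_softmax(model(x), dim=1)
            loss = F.kl_div(s_log_probs, t_probs, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```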
- Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers [71.70205894168039]
We consider instance-wise unlearning, whose goal is to delete information about a set of instances from a pre-trained model.
We propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation level, and 2) leveraging weight importance metrics to pinpoint network parameters guilty of propagating unwanted information.
arXiv Detail & Related papers (2023-01-27T07:53:50Z)
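Weight importance of the kind named in 2) is often approximated with squared gradients (a Fisher-style score). The sketch below computes such a score on the instances to be deleted and builds masks selecting the top fraction of parameters for the unlearning update; the score and `keep_ratio` are assumptions, not the paper's metric.

```python
import torch

def forget_importance(model, loss_fn, forget_loader):
    """Fisher-style importance: accumulated squared gradient of the forget
    loss, per parameter."""
    imp = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                imp[n] += p.grad.detach() ** 2
    return imp

def top_parameter_masks(imp, keep_ratio=0.05):
    """Boolean masks selecting the most forget-relevant weights to update."""
    flat = torch.cat([v.flatten() for v in imp.values()])
    k = max(1, int((1.0 - keep_ratio) * flat.numel()))
    thresh = flat.kthvalue(k).values
    return {n: v >= thresh for n, v in imp.items()}
```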
arXiv Detail & Related papers (2023-01-27T07:53:50Z) - Machine Unlearning Method Based On Projection Residual [23.24026891609028]
This paper adopts a projection residual method based on Newton's method.
The main purpose is to implement machine unlearning tasks in the context of linear regression models and neural network models.
Experiments show that this method deletes data more thoroughly, coming close to retraining the model.
arXiv Detail & Related papers (2022-09-30T07:29:55Z)
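For linear least squares the Newton recipe has a clean closed form: because the full-data gradient vanishes at the current weights, one Newton step with the downdated Hessian lands exactly on the retrained solution. The sketch below shows that special case; the paper's projection residual update for neural networks is more involved.

```python
import numpy as np

def newton_unlearn_least_squares(w, X_forget, y_forget, H_full):
    """Exact one-step removal for linear least squares.
    H_full = X.T @ X over all training data; w solves the full problem.
    Since the full gradient at w is zero, the remaining-data gradient at w is
    minus the forget samples' gradient, so one Newton step with the downdated
    Hessian recovers the model retrained without them."""
    g = X_forget.T @ (X_forget @ w - y_forget)   # forget samples' gradient at w
    H_remain = H_full - X_forget.T @ X_forget    # Hessian without forget samples
    return w + np.linalg.solve(H_remain, g)
```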
arXiv Detail & Related papers (2022-09-30T07:29:55Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often "remember" the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI [10.374979214803805]
Catastrophic forgetting describes the phenomenon when a neural network completely forgets previous knowledge when given new information.
We propose a novel training algorithm called training by explaining, in which we leverage Layer-wise Relevance Propagation to retain the information a neural network has already learned in previous tasks when training on new data.
Our method not only successfully retains the knowledge of old tasks within the neural network but does so more resource-efficiently than other state-of-the-art solutions.
arXiv Detail & Related papers (2022-05-04T08:00:49Z)
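One way to turn explanations into retention, sketched below, is an EWC-style penalty that anchors high-relevance parameters. How the paper attributes Layer-wise Relevance Propagation scores to individual weights is not given in the summary, so the `relevance` argument here is an assumed per-parameter score.

```python
def relevance_penalty(model, old_params, relevance, lam=1.0):
    """Sketch: penalize drift of parameters that carried high relevance on
    old-task data. relevance and old_params map parameter names to tensors;
    the attribution of LRP scores to weights is an assumption."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (relevance[n] * (p - old_params[n]) ** 2).sum()
    return lam * penalty

# Hypothetical usage during new-task training:
# loss = task_loss + relevance_penalty(model, old_params, relevance)
```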
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and on neural machine translation empirically demonstrate that our algorithm yields significant improvements over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z)
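The summary says the student exposes its internal states to the teacher. A minimal sketch of such a teacher follows, mapping per-example features and losses to weights in (0, 1); the feature choice, the small MLP, and the sigmoid output are assumptions, and the joint teacher training the paper relies on is omitted here.

```python
import torch
import torch.nn as nn

class TeacherReweighter(nn.Module):
    """Toy teacher: maps a student's internal state (e.g., penultimate-layer
    features) plus its per-example loss to a data weight in (0, 1)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + 1, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats, losses):
        return torch.sigmoid(self.net(torch.cat([feats, losses[:, None]], dim=1)))

# Hypothetical usage:
# losses = F.cross_entropy(student_logits, y, reduction="none")
# weights = reweighter(features, losses.detach()).squeeze(1)
# weighted_loss = (weights * losses).mean()
```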
- Learn to Forget: Machine Unlearning via Neuron Masking [36.072775581268544]
We propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method.
We also propose a novel unlearning method called Forsaken. It is superior to previous work in either utility or efficiency.
arXiv Detail & Related papers (2020-03-24T15:46:38Z)
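The summary does not detail how Forsaken masks neurons. One common realization, sketched below, zeroes gradients outside a selected set of parameters so that unlearning updates touch only the masked neurons; the `mask_dict` interface is an assumption.

```python
import torch

def mask_neuron_gradients(model, mask_dict):
    """Sketch of neuron masking: zero out gradients everywhere except the
    neurons selected for unlearning, so an optimizer step only updates the
    masked set. mask_dict maps parameter names to boolean masks."""
    for n, p in model.named_parameters():
        if p.grad is not None and n in mask_dict:
            p.grad.mul_(mask_dict[n].to(p.grad.dtype))
```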