Machine Unlearning Method Based On Projection Residual
- URL: http://arxiv.org/abs/2209.15276v1
- Date: Fri, 30 Sep 2022 07:29:55 GMT
- Title: Machine Unlearning Method Based On Projection Residual
- Authors: Zihao Cao, Jianzong Wang, Shijing Si, Zhangcheng Huang, Jing Xiao
- Abstract summary: This paper adopts a projection residual method based on Newton's method.
The main purpose is to implement machine unlearning in linear regression models and neural network models.
Experiments show that the method deletes data more thoroughly, coming close to retraining the model.
- Score: 23.24026891609028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models (mainly neural networks) are increasingly used in
real life. Users feed their data to the model for training, but the process is
usually one-way: once trained, the model retains the data, and even when data
points are removed from the dataset, their effects persist in the model. With
more and more laws and regulations around the world protecting data privacy, it
becomes ever more important to make models forget such data completely through
machine unlearning.
This paper adopts a projection residual method based on Newton iteration. The
main purpose is to implement machine unlearning in the context of linear
regression models and neural network models. The method uses iterative
weighting to completely forget data points and their corresponding influence;
its computational cost is linear in the feature dimension of the data and
independent of the size of the training set, so it can improve on current
machine unlearning methods. Results are evaluated with the feature-injection
test (FIT); a sketch of this test follows the related-papers list below.
Experiments show that the method deletes data more thoroughly, producing
models close to those obtained by retraining.
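To make the Newton-based idea concrete, here is a minimal, illustrative sketch of unlearning in linear regression; it is not the authors' exact projection residual algorithm, and all names and the synthetic data are assumptions for illustration. Starting from the full-data parameters, a single Newton step on the retained objective removes the forgotten points' influence; because the least-squares loss is quadratic, this one step lands exactly on the retrained solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: n samples, d features (illustrative only).
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit_ols(X, y):
    """Ordinary least squares via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

theta_full = fit_ols(X, y)

# Points the user asks us to forget.
forget = np.arange(10)
keep = np.setdiff1d(np.arange(n), forget)

# One Newton step on the retained objective, starting from the full model.
# The least-squares loss is quadratic, so this single step coincides with
# the minimizer on the retained data, i.e. with full retraining.
H = X[keep].T @ X[keep]                           # Hessian of retained loss
g = X[keep].T @ (X[keep] @ theta_full - y[keep])  # gradient at theta_full
theta_unlearned = theta_full - np.linalg.solve(H, g)

theta_retrained = fit_ols(X[keep], y[keep])
print(np.max(np.abs(theta_unlearned - theta_retrained)))  # ~1e-12
```

For neural networks the loss is not quadratic, so a single Newton step is only an approximation; that is where iterative schemes such as the paper's come in. Note the dominant cost above is in the feature dimension, not the training-set size.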
Related papers
- Machine Unlearning on Pre-trained Models by Residual Feature Alignment Using LoRA [15.542668474378633]
We propose a novel and efficient machine unlearning method on pre-trained models.
We leverage LoRA to decompose the model's intermediate features into pre-trained features and residual features.
The method aims to learn the zero residuals on the retained set and shifted residuals on the unlearning set.
arXiv Detail & Related papers (2024-11-13T08:56:35Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up under challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Machine Unlearning Methodology base on Stochastic Teacher Network [33.763901254862766]
"Right to be forgotten" grants data owners the right to actively withdraw data that has been used for model training.
Existing machine unlearning methods have been found to be ineffective in quickly removing knowledge from deep learning models.
This paper proposes using a stochastic network as a teacher to expedite mitigating the influence of forgotten data on the model.
arXiv Detail & Related papers (2023-08-28T06:05:23Z) - Machine Unlearning for Causal Inference [0.6621714555125157]
It is important to enable a model to forget some of the information it has captured about a given user (machine unlearning).
This paper introduces the concept of machine unlearning for causal inference, particularly propensity score matching and treatment effect estimation.
The dataset used in the study is the Lalonde dataset, a widely used dataset for evaluating the effectiveness of job training programs.
arXiv Detail & Related papers (2023-08-24T17:27:01Z) - AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z) - Deep Regression Unlearning [6.884272840652062]
We introduce deep regression unlearning methods that generalize well and are robust to privacy attacks.
We conduct regression unlearning experiments for computer vision, natural language processing and forecasting applications.
arXiv Detail & Related papers (2022-10-15T05:00:20Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for sample erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z) - Approximate Data Deletion from Machine Learning Models [31.689174311625084]
Deleting data from a trained machine learning (ML) model is a critical task in many applications.
We propose a new approximate deletion method for linear and logistic models.
We also develop a new feature-injection test to evaluate the thoroughness of data deletion from ML models.
arXiv Detail & Related papers (2020-02-24T05:12:03Z) - Certified Data Removal from Machine Learning Models [79.91502073022602]
Good data stewardship requires removal of data at the request of the data's owner.
This raises the question if and how a trained machine-learning model, which implicitly stores information about its training data, should be affected by such a removal request.
We study this problem by defining certified removal: a very strong theoretical guarantee that a model from which data is removed cannot be distinguished from a model that never observed the data to begin with.
arXiv Detail & Related papers (2019-11-08T03:57:41Z)
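As referenced in the abstract above, the feature-injection test (FIT), introduced in the Approximate Data Deletion paper listed here, probes how thoroughly data was deleted. Below is a minimal sketch of the idea, assuming a ridge-regularized linear model; the exact protocol in that paper differs in details, and all names and data here are illustrative assumptions. An extra feature that carries signal only on the forget set is injected; after genuine deletion, the model's weight on that feature should collapse toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Inject an extra feature that is active only on the forget set, and add
# signal to those labels so the full model must put weight on it.
forget = np.arange(20)
injected = np.zeros((n, 1))
injected[forget] = 1.0
X_fit = np.hstack([X, injected])
y_fit = y.copy()
y_fit[forget] += 5.0  # label signal carried only by the injected feature

def fit_ridge(X, y, lam=1e-3):
    """Ridge regression; the regularizer keeps the system nonsingular
    once the injected column becomes all-zero on the retained data."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

theta_full = fit_ridge(X_fit, y_fit)
keep = np.setdiff1d(np.arange(n), forget)
theta_after = fit_ridge(X_fit[keep], y_fit[keep])  # stand-in for an unlearning method

# FIT metric: weight on the injected feature before vs. after deletion.
# A thorough unlearning method should drive it to (near) zero.
print("injected-feature weight, full model:", theta_full[-1])
print("injected-feature weight, after deletion:", theta_after[-1])
```

Here exact retraining stands in for the unlearning method under test; in practice one would substitute the approximate unlearning update and compare its injected-feature weight against this retrained baseline.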