Fast Yet Effective Machine Unlearning
- URL: http://arxiv.org/abs/2111.08947v5
- Date: Wed, 31 May 2023 17:42:15 GMT
- Title: Fast Yet Effective Machine Unlearning
- Authors: Ayush K Tarun, Vikram S Chundawat, Murari Mandal, Mohan Kankanhalli
- Abstract summary: We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation.
We show excellent unlearning while substantially retaining the overall model accuracy.
This work is an important step towards fast and easy implementation of unlearning in deep networks.
- Score: 6.884272840652062
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unlearning the data observed during the training of a machine learning (ML)
model is an important task that can play a pivotal role in fortifying the
privacy and security of ML-based applications. This paper raises the following
questions: (i) can we unlearn a single or multiple class(es) of data from an ML
model without looking at the full training data even once? (ii) can we make the
process of unlearning fast and scalable to large datasets, and generalize it to
different deep networks? We introduce a novel machine unlearning framework with
error-maximizing noise generation and impair-repair based weight manipulation
that offers an efficient solution to the above questions. An error-maximizing
noise matrix is learned for the class to be unlearned using the original model.
The noise matrix is used to manipulate the model weights to unlearn the
targeted class of data. We introduce impair and repair steps for a controlled
manipulation of the network weights. In the impair step, the noise matrix along
with a very high learning rate is used to induce sharp unlearning in the model.
Thereafter, the repair step is used to regain the overall performance. With
very few update steps, we show excellent unlearning while substantially
retaining the overall model accuracy. Unlearning multiple classes requires a
similar number of update steps as for a single class, making our approach
scalable to large problems. Our method is quite efficient in comparison to the
existing methods, works for multi-class unlearning, does not put any
constraints on the original optimization mechanism or network design, and works
well in both small and large-scale vision tasks. This work is an important step
towards fast and easy implementation of unlearning in deep networks. Source
code: https://github.com/vikram2000b/Fast-Machine-Unlearning
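The abstract outlines three steps: learn an error-maximizing noise matrix for the class to be forgotten, impair the model with that noise at a high learning rate, and repair on retained data. Below is a minimal, hedged PyTorch sketch of that workflow. It is not the authors' implementation (see the linked repository for that); the names `model`, `forget_class`, and `retain_loader`, and all hyperparameters, are illustrative assumptions.

```python
# Illustrative sketch of the impair-repair idea described in the abstract.
# Not the official code; all names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def learn_error_maximizing_noise(model, forget_class, shape, steps=100, lr=0.1, device="cpu"):
    """Optimize a batch of noise so the frozen model's loss for the forget-class
    label is maximized (gradient ascent on the error)."""
    model.eval()
    noise = torch.randn(shape, device=device, requires_grad=True)
    labels = torch.full((shape[0],), forget_class, dtype=torch.long, device=device)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -F.cross_entropy(model(noise), labels)  # minimize negative = maximize error
        loss.backward()
        opt.step()
    return noise.detach(), labels

def impair(model, noise, labels, lr=0.02):
    """Impair step: a single update on the error-maximizing noise with a
    relatively high learning rate to sharply degrade the targeted class."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(model(noise), labels).backward()
    opt.step()

def repair(model, retain_loader, lr=1e-3, device="cpu"):
    """Repair step: brief fine-tuning on retained data to recover overall accuracy."""
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in retain_loader:
        opt.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        opt.step()
```

As the abstract states, only a few such update steps are used, which is what makes the procedure fast and keeps it independent of the original training data and network design.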
Related papers
- LoRA Unlearns More and Retains More (Student Abstract) [0.0]
PruneLoRA reduces the need for large-scale parameter updates by applying low-rank updates to the model.
We leverage LoRA to selectively modify a subset of the pruned model's parameters, thereby reducing the computational cost and memory requirements while improving the model's ability to retain performance on the remaining classes.
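For context, the low-rank (LoRA-style) update this summary refers to can be sketched as below. This is a generic illustration of a low-rank adapter on a single linear layer, not the PruneLoRA method itself, and the class and parameter names are assumptions.

```python
# Generic LoRA-style adapter sketch: the frozen base weight W gets a trainable
# low-rank update (alpha/r) * B @ A. Names and defaults are illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # base weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```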
arXiv Detail & Related papers (2024-11-16T16:47:57Z) - Machine Unlearning on Pre-trained Models by Residual Feature Alignment Using LoRA [15.542668474378633]
We propose a novel and efficient machine unlearning method on pre-trained models.
We leverage LoRA to decompose the model's intermediate features into pre-trained features and residual features.
The method aims to learn zero residuals on the retained set and shifted residuals on the unlearning set.
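A hedged sketch of such a residual-alignment objective follows; it illustrates the stated idea (zero residuals on retained data, shifted residuals on forget data), not the paper's actual loss, and every name in it is an assumption.

```python
# Illustrative residual-alignment loss: drive (adapted - frozen) features to zero
# on retained samples and toward a fixed shift on samples to be unlearned.
import torch

def residual_alignment_loss(adapted_feats, frozen_feats, is_forget, shift=1.0):
    residual = adapted_feats - frozen_feats                    # residual features
    target = torch.where(is_forget.unsqueeze(1),               # shifted target on forget set,
                         torch.full_like(residual, shift),
                         torch.zeros_like(residual))           # zero target on retained set
    return ((residual - target) ** 2).mean()
```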
arXiv Detail & Related papers (2024-11-13T08:56:35Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
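The core projection idea can be sketched as below. This is a generic gradient-projection illustration rather than the PGU algorithm itself; `retain_basis` is an assumed orthonormal basis of parameter-space directions important for the retained data.

```python
# Illustrative gradient projection: remove from the unlearning gradient its
# components along the subspace spanned by retained-data directions, so the
# update minimally interferes with knowledge about the remaining dataset.
import torch

def project_out(grad: torch.Tensor, retain_basis: torch.Tensor) -> torch.Tensor:
    """grad: (d,) unlearning gradient; retain_basis: (d, k) orthonormal columns."""
    coeffs = retain_basis.t() @ grad          # coordinates in the retained subspace
    return grad - retain_basis @ coeffs       # orthogonal complement of the projection
```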
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an unlearning framework that can efficiently update LLMs without having to retrain the whole model after data removals.
arXiv Detail & Related papers (2023-10-31T03:35:59Z) - Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning
in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference tree-based Machine Learning (ML) models and monolithic Deep Learning (DL) models.
We show that (i) using large datasets we can obtain more general representations, and (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods, by reusing learned representations, reach the performance of tree-based models on small tasks as well.
arXiv Detail & Related papers (2023-05-21T11:20:49Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose a first approach for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
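An influence-function style closed-form removal update can be sketched as below. Sign conventions, scaling constants, and the paper's specific estimator are omitted, so treat this only as an illustration of the general idea; all names are assumptions.

```python
# Illustrative closed-form unlearning update: remove the contribution of deleted
# points with a Newton-like step theta <- theta + H^{-1} * sum of their gradients.
import torch

def influence_unlearn(theta, hessian, grads_removed, damping=1e-3):
    """theta: (d,) parameters; hessian: (d, d) loss Hessian over the training data;
    grads_removed: (m, d) per-example gradients of the deleted points."""
    H = hessian + damping * torch.eye(hessian.shape[0], dtype=hessian.dtype)  # damped for invertibility
    total = grads_removed.sum(dim=0)
    return theta + torch.linalg.solve(H, total)
```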
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - Certifiable Machine Unlearning for Linear Models [1.484852576248587]
Machine unlearning is the task of updating machine learning (ML) models after a subset of the training data they were trained on is deleted.
We present an experimental study of the three state-of-the-art approximate unlearning methods for linear models.
arXiv Detail & Related papers (2021-06-29T05:05:58Z) - Adversarial Training of Variational Auto-encoders for Continual
Zero-shot Learning [1.90365714903665]
We present a hybrid network that consists of a shared VAE module to hold information of all tasks and task-specific private VAE modules for each task.
The model's size grows with each task to prevent catastrophic forgetting of task-specific skills.
We show our method is superior in sequential class learning with ZSL (Zero-Shot Learning) and GZSL (Generalized Zero-Shot Learning).
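Structurally, the shared-plus-private layout described above can be sketched as follows; the VAEs, adversarial training, and zero-shot components are omitted, and the module choices are placeholders.

```python
# Structural sketch only: a shared module for all tasks plus a growing list of
# task-specific private modules. Linear layers stand in for the actual VAEs.
import torch.nn as nn

class HybridTaskBank(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=64):
        super().__init__()
        self.shared = nn.Linear(feat_dim, latent_dim)   # placeholder for the shared VAE
        self.private = nn.ModuleList()                  # one private module per task
        self.feat_dim, self.latent_dim = feat_dim, latent_dim

    def add_task(self):
        """Grow the model with a new task-specific private module."""
        self.private.append(nn.Linear(self.feat_dim, self.latent_dim))

    def forward(self, x, task_id):
        return self.shared(x) + self.private[task_id](x)
```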
arXiv Detail & Related papers (2021-02-07T11:21:24Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
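The zeroth-order ingredient mentioned above can be sketched as a two-point finite-difference gradient estimator. This is a generic illustration of query-only gradient estimation, not the BAR algorithm itself; `loss_fn`, `theta`, and the hyperparameters are assumptions.

```python
# Illustrative zeroth-order gradient estimate: the loss is available only through
# queries to the black-box model, so its gradient w.r.t. the reprogramming
# parameters `theta` is estimated from random perturbations.
import torch

def zeroth_order_grad(loss_fn, theta, num_dirs=20, mu=0.01):
    """loss_fn(theta) -> scalar loss obtained via black-box queries; theta: tensor."""
    grad = torch.zeros_like(theta)
    base = loss_fn(theta)
    for _ in range(num_dirs):
        u = torch.randn_like(theta)
        grad += (loss_fn(theta + mu * u) - base) / mu * u
    return grad / num_dirs
```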
arXiv Detail & Related papers (2020-07-17T01:52:34Z)