Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer
Level Attack and Knowledge Distillation
- URL: http://arxiv.org/abs/2312.16823v1
- Date: Thu, 28 Dec 2023 04:38:06 GMT
- Title: Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer
Level Attack and Knowledge Distillation
- Authors: Hyunjune Kim, Sangyong Lee, Simon S. Woo
- Abstract summary: We propose a fast and novel machine unlearning paradigm at the layer level called layer attack unlearning.
In this work, we introduce the Partial-PGD algorithm to locate the samples to forget efficiently.
We also use Knowledge Distillation (KD) to reliably learn the decision boundaries from the teacher.
- Score: 21.587358050012032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, serious concerns have been raised about the privacy issues
related to training datasets in machine learning algorithms when they include
personal data. Various regulations in different countries, including the GDPR,
grant individuals the right to have personal data erased, known as 'the right
to be forgotten' or 'the right to erasure'. However, there has been little
research on effectively and practically deleting the requested personal data
from the training set without jeopardizing overall machine learning
performance. In this work, we propose a fast and novel machine unlearning
paradigm at the layer level, called layer attack unlearning, which is highly
accurate and fast compared to existing machine unlearning algorithms. We
introduce the Partial-PGD algorithm to efficiently locate the samples to
forget. In addition, inspired by the Forward-Forward algorithm, we use only
the last layer of the model for the unlearning process. Lastly, we use
Knowledge Distillation (KD) to reliably learn the decision boundaries from the
teacher using soft label information, improving accuracy. We conducted
extensive experiments with SOTA machine unlearning models and demonstrated the
effectiveness of our approach in both accuracy and end-to-end unlearning
performance.
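To make the pipeline concrete, here is a minimal sketch, not the authors'
implementation: it assumes a PyTorch classifier split into a hypothetical
`model.backbone` and final linear layer `model.head`, hypothetical
`forget_loader`/`retain_loader` iterables, and it stands in for Partial-PGD
with ordinary PGD run in the last layer's feature space, since the exact
attack variant is not specified in this summary.

```python
# A minimal sketch (not the authors' code) of layer-level attack unlearning:
# PGD computed only through the final linear layer stands in for Partial-PGD,
# and soft-label knowledge distillation preserves the retained decision
# boundaries while just the last layer is fine-tuned.
import copy

import torch
import torch.nn.functional as F


def pgd_on_features(head, feats, labels, eps=0.5, alpha=0.1, steps=5):
    """Ordinary PGD in feature space; a stand-in for the paper's Partial-PGD."""
    adv = feats.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(head(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()
        adv = feats + (adv - feats).clamp(-eps, eps)  # project into the eps-ball
    return adv


def unlearn_last_layer(model, forget_loader, retain_loader, epochs=1, tau=2.0):
    teacher = copy.deepcopy(model).eval()      # frozen teacher for distillation
    for p in model.parameters():               # freeze the whole network ...
        p.requires_grad_(False)
    for p in model.head.parameters():          # ... except the last layer
        p.requires_grad_(True)
    opt = torch.optim.SGD(model.head.parameters(), lr=1e-2)

    for _ in range(epochs):
        for (xf, yf), (xr, _) in zip(forget_loader, retain_loader):
            # Attack the forget samples' features to find the nearest wrong
            # side of the decision boundary, then train toward those labels.
            feats_f = model.backbone(xf).detach()
            adv = pgd_on_features(model.head, feats_f, yf)
            adv_labels = model.head(adv).argmax(dim=1).detach()
            forget_loss = F.cross_entropy(model.head(feats_f), adv_labels)

            # Distill the teacher's softened outputs on retained data so the
            # remaining classes' boundaries are not disturbed.
            feats_r = model.backbone(xr).detach()
            with torch.no_grad():
                soft = F.softmax(teacher.head(feats_r) / tau, dim=1)
            kd_loss = F.kl_div(F.log_softmax(model.head(feats_r) / tau, dim=1),
                               soft, reduction="batchmean") * tau * tau

            opt.zero_grad()
            (forget_loss + kd_loss).backward()
            opt.step()
    return model
```

Freezing everything except `model.head` mirrors the paper's last-layer
restriction, while the KD term anchors the retained classes to the teacher's
soft labels as the attack-derived labels push the forget samples across the
boundary. All hyperparameters above are illustrative assumptions.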
Related papers
- Game-Theoretic Machine Unlearning: Mitigating Extra Privacy Leakage [12.737028324709609]
Recent legislation obligates organizations to remove requested data and its influence from a trained model.
We propose a game-theoretic machine unlearning algorithm that simulates the competitive relationship between unlearning performance and privacy protection.
arXiv Detail & Related papers (2024-11-06T13:47:04Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Efficient Knowledge Deletion from Trained Models through Layer-wise Partial Machine Unlearning [2.3496568239538083]
This paper introduces a novel class of machine unlearning algorithms.
The first method, partial amnesiac unlearning, integrates layer-wise pruning with amnesiac unlearning.
The second method assimilates layer-wise partial updates into label-flipping and optimization-based unlearning.
arXiv Detail & Related papers (2024-03-12T12:49:47Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an unlearning framework that can efficiently update LLMs without retraining the whole model after data removals.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Generative Adversarial Networks Unlearning [13.342749941357152]
Machine unlearning has emerged as a solution to erase training data from trained machine learning models.
Research on unlearning for Generative Adversarial Networks (GANs) is limited due to their unique architecture, which comprises a generator and a discriminator.
We propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
arXiv Detail & Related papers (2023-08-19T02:21:21Z)
- Random Relabeling for Efficient Machine Unlearning [8.871042314510788]
Individuals' right to retract personal data and relevant data privacy regulations pose great challenges to machine learning.
We propose a random-relabeling unlearning scheme to efficiently handle sequential data removal requests.
We also propose a less constraining removal certification method based on the probability distribution similarity between our scheme and naive unlearning.
arXiv Detail & Related papers (2023-05-21T02:37:26Z)
- Forget Unlearning: Towards True Data-Deletion in Machine Learning [18.656957502454592]
We show that unlearning is not equivalent to data deletion and does not guarantee the "right to be forgotten".
We propose an accurate, computationally efficient, and secure data-deletion machine learning algorithm in the online setting.
arXiv Detail & Related papers (2022-10-17T10:06:11Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters (a first-order sketch of such an update appears after this list).
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
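As referenced in the "Machine Unlearning of Features and Labels" entry above,
here is a hedged first-order sketch of unlearning through a closed-form
parameter update. It is illustrative only: `model` is any differentiable
classifier, `x` is a batch of affected inputs, `y_old`/`y_new` are hypothetical
original and corrected labels, `tau` is an assumed unlearning rate, and the
second-order variant, which scales the update by an inverse Hessian, is
omitted.

```python
# A hedged first-order sketch of closed-form unlearning: subtract the gradient
# contribution of the to-be-forgotten labels and add that of their corrected
# replacements in a single parameter update. Illustrative assumption, not the
# paper's exact algorithm.
import torch
import torch.nn.functional as F


def first_order_unlearn(model, x, y_old, y_new, tau=0.05):
    """One closed-form step: theta <- theta - tau * (grad_new - grad_old)."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_old = torch.autograd.grad(F.cross_entropy(model(x), y_old), params)
    grad_new = torch.autograd.grad(F.cross_entropy(model(x), y_new), params)
    with torch.no_grad():
        for p, g_old, g_new in zip(params, grad_old, grad_new):
            p -= tau * (g_new - g_old)  # move toward the corrected labels
    return model
```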