Model Sparsity Can Simplify Machine Unlearning
- URL: http://arxiv.org/abs/2304.04934v9
- Date: Sun, 22 Oct 2023 16:23:51 GMT
- Title: Model Sparsity Can Simplify Machine Unlearning
- Authors: Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu,
Yang Liu, Pranay Sharma, Sijia Liu
- Abstract summary: In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process.
Our study introduces a novel model-based perspective: model sparsification via weight pruning.
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner.
- Score: 33.18951938708467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In response to recent data regulation requirements, machine unlearning (MU)
has emerged as a critical process to remove the influence of specific examples
from a given model. Although exact unlearning can be achieved through complete
model retraining using the remaining dataset, the associated computational
costs have driven the development of efficient, approximate unlearning
techniques. Moving beyond data-centric MU approaches, our study introduces a
novel model-based perspective: model sparsification via weight pruning, which
is capable of reducing the gap between exact unlearning and approximate
unlearning. We show in both theory and practice that model sparsity can boost
the multi-criteria unlearning performance of an approximate unlearner, closing
the approximation gap, while continuing to be efficient. This leads to a new MU
paradigm, termed prune first, then unlearn, which infuses a sparse model prior
into the unlearning process. Building on this insight, we also develop a
sparsity-aware unlearning method that utilizes sparsity regularization to
enhance the training process of approximate unlearning. Extensive experiments
show that our proposals consistently benefit MU in various unlearning
scenarios. A notable highlight is the 77% unlearning efficacy gain of
fine-tuning (one of the simplest unlearning methods) when using sparsity-aware
unlearning. Furthermore, we demonstrate the practical impact of our proposed MU
methods in addressing other machine learning challenges, such as defending
against backdoor attacks and enhancing transfer learning. Code is available
at https://github.com/OPTML-Group/Unlearn-Sparse.
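For intuition, the "prune first, then unlearn" recipe can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation (see the repository above): `model`, `retain_loader`, and all hyperparameters are placeholders, one-shot global magnitude pruning stands in for the paper's pruning choices, and plain fine-tuning on the retained data plays the role of the approximate unlearner; setting `l1_lambda > 0` mimics the sparsity-aware variant.

```python
import torch
import torch.nn.utils.prune as prune

def prune_then_unlearn(model, retain_loader, sparsity=0.9,
                       epochs=5, lr=1e-3, l1_lambda=0.0):
    """Sketch: magnitude-prune the model, then approximately unlearn by
    fine-tuning on the retained data only (the forget set is never seen
    again). l1_lambda > 0 adds the sparsity regularization of the
    sparsity-aware variant."""
    # 1) One-shot global magnitude pruning to the target sparsity.
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)

    # 2) Approximate unlearning: fine-tune on the retain set. The pruning
    #    masks keep the removed weights at zero throughout.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in retain_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            if l1_lambda > 0:  # sparsity-aware unlearning
                loss = loss + l1_lambda * sum(
                    p.abs().sum() for p in model.parameters())
            loss.backward()
            opt.step()

    # 3) Bake the masks into the weights.
    for m, name in params:
        prune.remove(m, name)
    return model
```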
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up in challenging settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning objective.
An Iterative Unlearning Refinement module dynamically assesses the unlearning extent on specific data pieces and makes iterative updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
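GA-based unlearning itself is simple to state; here is a minimal, hypothetical sketch of it (not this paper's actual controlling methods, which the summary does not specify): ascend rather than descend the loss on the forget set, with a crude loss cap standing in for a control on the extent of unlearning.

```python
import torch

def gradient_ascent_unlearn(model, forget_loader, lr=1e-5,
                            max_steps=100, loss_cap=5.0):
    """Gradient-ascent (GA) unlearning sketch: maximize the loss on the
    forget set. loss_cap is an illustrative early stop meant to avoid
    the excessive unlearning the paper warns about."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for step, (x, y) in enumerate(forget_loader):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        (-loss).backward()  # negate the loss, so the step ascends it
        opt.step()
        if loss.item() > loss_cap or step + 1 >= max_steps:
            break  # stop before the model degrades globally
    return model
```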
arXiv Detail & Related papers (2024-06-13T14:41:00Z) - Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing [33.418062986773606]
We first propose the framework of Machine Unlearning for Contrastive learning (MUC) and adapt existing methods to it.
We observe that several methods are mediocre unlearners and that existing auditing tools may not be sufficient for data owners to validate the unlearning effects in contrastive learning.
We propose a novel method called Alignment Calibration (AC) that explicitly considers the properties of contrastive learning and optimizes towards novel metrics to easily verify unlearning.
arXiv Detail & Related papers (2024-06-05T19:55:45Z) - Machine Unlearning in Contrastive Learning [3.6218162133579694]
We introduce a novel gradient constraint-based approach for training the model to effectively achieve machine unlearning.
Our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models.
arXiv Detail & Related papers (2024-05-12T16:09:01Z) - An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z) - Unlearnable Algorithms for In-context Learning [36.895152458323764]
In this paper, we focus on efficient unlearning methods for the task adaptation phase of a pretrained large language model.
We observe that an LLM's ability to do in-context learning for task adaptation allows for efficient exact unlearning of task adaptation training data.
We propose a new holistic measure of unlearning cost which accounts for varying inference costs.
arXiv Detail & Related papers (2024-02-01T16:43:04Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
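The gradient-projection idea admits a heavily simplified sketch. PGU itself builds a subspace from the retain data; the illustration below projects against a single retain-gradient direction instead, and every name in it is a placeholder:

```python
import torch

def projected_unlearning_step(model, forget_batch, retain_batch,
                              loss_fn, lr=1e-4):
    """One ascent step on the forget batch whose gradient component
    along the retain-batch gradient has been removed, so the update
    (to first order) does not disturb performance on the retain data."""
    xf, yf = forget_batch
    xr, yr = retain_batch

    # Gradient that increases the loss on the forget data.
    model.zero_grad()
    (-loss_fn(model(xf), yf)).backward()
    g_forget = torch.cat([p.grad.flatten() for p in model.parameters()])

    # Gradient direction that matters for the retain data.
    model.zero_grad()
    loss_fn(model(xr), yr).backward()
    g_retain = torch.cat([p.grad.flatten() for p in model.parameters()])

    # Project out the interfering component, then apply the update.
    g = g_forget - (g_forget @ g_retain) / (g_retain @ g_retain + 1e-12) * g_retain
    with torch.no_grad():
        offset = 0
        for p in model.parameters():
            n = p.numel()
            p -= lr * g[offset:offset + n].view_as(p)
            offset += n
    return model
```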
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
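The zeroth-order ingredient generalizes beyond this paper, so it can be sketched generically. Below, `black_box_loss` is a placeholder for a loss computed purely from the black-box model's input-output responses, and the two-point random-direction estimator is a standard choice rather than necessarily BAR's exact one:

```python
import numpy as np

def zeroth_order_grad(black_box_loss, theta, queries=10, mu=0.01):
    """Estimate the gradient of a black-box loss at theta using only
    function evaluations: average finite differences along random
    directions. No backpropagation through the model is needed."""
    grad = np.zeros_like(theta)
    f0 = black_box_loss(theta)                 # baseline query
    for _ in range(queries):
        u = np.random.randn(*theta.shape)      # random probe direction
        u /= np.linalg.norm(u) + 1e-12
        f1 = black_box_loss(theta + mu * u)    # one perturbed query
        grad += (f1 - f0) / mu * u             # directional derivative estimate
    return grad / queries

# Usage: update the reprogramming parameters without model gradients.
# theta = theta - lr * zeroth_order_grad(loss_fn, theta)
```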
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
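The prototype idea admits a stripped-down sketch. PCL's full ProtoNCE objective also includes instance-wise terms and per-cluster concentration estimates; the version below keeps only the core step, and its names and hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def proto_contrastive_loss(embeddings, k=10, temperature=0.1):
    """Cluster normalized embeddings with k-means to obtain prototypes,
    then pull each embedding toward its assigned prototype and away
    from the others with an InfoNCE-style cross-entropy."""
    z = F.normalize(embeddings, dim=1)
    km = KMeans(n_clusters=k, n_init=10).fit(z.detach().cpu().numpy())
    protos = F.normalize(torch.as_tensor(km.cluster_centers_,
                                         dtype=z.dtype, device=z.device), dim=1)
    assign = torch.as_tensor(km.labels_, dtype=torch.long, device=z.device)
    logits = z @ protos.T / temperature  # similarity to every prototype
    return F.cross_entropy(logits, assign)
```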
arXiv Detail & Related papers (2020-05-11T09:53:36Z)