LegoNet: A Fast and Exact Unlearning Architecture
- URL: http://arxiv.org/abs/2210.16023v1
- Date: Fri, 28 Oct 2022 09:53:05 GMT
- Title: LegoNet: A Fast and Exact Unlearning Architecture
- Authors: Sihao Yu, Fei Sun, Jiafeng Guo, Ruqing Zhang, Xueqi Cheng
- Abstract summary: Machine unlearning aims to erase the impact of specific training samples from a trained model upon deletion requests.
We present a novel network, namely LegoNet, which adopts the framework of "fixed encoder + multiple adapters".
We show that LegoNet accomplishes fast and exact unlearning while maintaining acceptable performance, comprehensively outperforming unlearning baselines.
- Score: 59.49058450583149
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine unlearning aims to erase the impact of specific training samples from a
trained model upon deletion requests. Re-training the model on the retained
data after deletion is an effective but not efficient way due to the huge
number of model parameters and re-training samples. To speed up, a natural way
is to reduce such parameters and samples. However, such a strategy typically
leads to a loss in model performance, which poses the challenge of increasing
unlearning efficiency while maintaining acceptable performance. In this
paper, we present a novel network, namely LegoNet, which adopts the
framework of "fixed encoder + multiple adapters". We fix the encoder (i.e., the
backbone for representation learning) of LegoNet to reduce the parameters that
need to be re-trained during unlearning. Since the encoder occupies a major
part of the model parameters, the unlearning efficiency is significantly
improved. However, fixing the encoder empirically leads to a significant
performance drop. To compensate for the performance loss, we adopt the ensemble
of multiple adapters, which are independent sub-models adopted to infer the
prediction from the encoding (i.e., the output of the encoder). Furthermore, we
design an activation mechanism for the adapters to further trade off unlearning
efficiency against model performance. This mechanism guarantees that each
sample can only impact very few adapters, so during unlearning, parameters and
samples that need to be re-trained are both reduced. The empirical experiments
verify that LegoNet accomplishes fast and exact unlearning while maintaining
acceptable performance, comprehensively outperforming unlearning baselines.
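The abstract's mechanism can be illustrated with a minimal sketch. This is a hypothetical illustration of the bookkeeping only, not the paper's implementation: all names (`activated_adapters`, `shards`, `unlearn`) and the hash-based assignment are assumptions standing in for whatever activation mechanism LegoNet actually uses. The key property shown is that each sample touches only a few adapters, so a deletion request requires re-training only those adapters on their reduced shards while the frozen encoder is untouched.

```python
import hashlib

K = 8        # number of independent adapters (illustrative)
ACTIVE = 2   # adapters activated per sample (illustrative)

def activated_adapters(sample_id: str) -> list[int]:
    """Deterministically assign a sample to ACTIVE of the K adapters."""
    h = int(hashlib.sha256(sample_id.encode()).hexdigest(), 16)
    return [(h + i) % K for i in range(ACTIVE)]

# Build per-adapter training shards from the (frozen) encoder's encodings.
shards: dict[int, set[str]] = {k: set() for k in range(K)}
for sid in ["s0", "s1", "s2", "s3"]:
    for k in activated_adapters(sid):
        shards[k].add(sid)

def unlearn(sample_id: str) -> list[int]:
    """Exact unlearning: drop the sample and report which adapters to re-train."""
    affected = activated_adapters(sample_id)
    for k in affected:
        shards[k].discard(sample_id)
        # ...here each affected adapter k would be re-trained on shards[k],
        # re-using the fixed encoder's outputs, so the cost stays small.
    return affected

affected = unlearn("s1")  # only ACTIVE adapters are touched per deletion
```

Because the encoder holds most of the parameters and is never re-trained, both the parameters and the samples involved in each deletion shrink to a few small adapter shards, which is the efficiency argument the abstract makes.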
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning [35.681853074122735]
We introduce Sequence-aware Sharded Sliced Training (S3T) to enhance the deletion capabilities of an exact unlearning system.
S3T enables parameter isolation by sequentially training layers with disjoint data slices.
We train the model on multiple data sequences, which allows S3T to handle an increased number of deletion requests.
arXiv Detail & Related papers (2024-06-24T01:45:13Z)
- Optimizing Dense Feed-Forward Neural Networks [0.0]
We propose a novel feed-forward neural network constructing method based on pruning and transfer learning.
Our approach can compress the number of parameters by more than 70%.
We also evaluate the degree of transfer learning by comparing the refined model against the original network trained from scratch.
arXiv Detail & Related papers (2023-12-16T23:23:16Z)
- Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding [38.11411155621616]
We study self-training as one of the predominant semi-supervised learning approaches.
We present UPET, a novel Uncertainty-aware self-Training framework.
We show that UPET achieves a substantial improvement in terms of performance and efficiency.
arXiv Detail & Related papers (2023-10-19T02:18:29Z)
- Analyzing the Performance of Deep Encoder-Decoder Networks as Surrogates for a Diffusion Equation [0.0]
We study the use of encoder-decoder convolutional neural network (CNN) as surrogates for steady-state diffusion solvers.
Our results indicate that increasing the size of the training set has a substantial effect on reducing performance fluctuations and overall error.
arXiv Detail & Related papers (2023-02-07T22:53:19Z)
- Slimmable Networks for Contrastive Self-supervised Learning [67.21528544724546]
Self-supervised learning makes significant progress in pre-training large models, but struggles with small models.
We present a one-stage solution to obtain pre-trained small models without the need for extra teachers.
A slimmable network consists of a full network and several weight-sharing sub-networks, which can be pre-trained once to obtain various networks.
arXiv Detail & Related papers (2022-09-30T15:15:05Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in real-world settings is commonly imbalanced and follows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- Coded Machine Unlearning [34.08435990347253]
We present a coded learning protocol where the dataset is linearly coded before the learning phase.
We also present the corresponding unlearning protocol for the coded learning model along with a discussion on the proposed protocol's success in ensuring perfect unlearning.
arXiv Detail & Related papers (2020-12-31T17:20:34Z)
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablations to show the effectiveness of the learned user representation on five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.