Federated Unlearning via Active Forgetting
- URL: http://arxiv.org/abs/2307.03363v1
- Date: Fri, 7 Jul 2023 03:07:26 GMT
- Title: Federated Unlearning via Active Forgetting
- Authors: Yuyuan Li, Chaochao Chen, Xiaolin Zheng, Jiaming Zhang
- Abstract summary: We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
- Score: 24.060724751342047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing concerns regarding the privacy of machine learning models have
catalyzed the exploration of machine unlearning, i.e., a process that removes
the influence of training data on machine learning models. This concern also
arises in the realm of federated learning, prompting researchers to address the
federated unlearning problem. However, federated unlearning remains
challenging. Existing unlearning methods can be broadly categorized into two
approaches, i.e., exact unlearning and approximate unlearning. Firstly,
implementing exact unlearning, which typically relies on the
partition-aggregation framework, in a distributed manner does not improve time
efficiency theoretically. Secondly, existing federated (approximate) unlearning
methods suffer from imprecise data influence estimation, significant
computational burden, or both. To this end, we propose a novel federated
unlearning framework based on incremental learning, which is independent of
specific models and federated settings. Our framework differs from existing
federated unlearning methods that rely on approximate retraining or data
influence estimation. Instead, we leverage new memories to overwrite old ones,
imitating the process of \textit{active forgetting} in neurology. Specifically,
the model, intended to unlearn, serves as a student model that continuously
learns from randomly initialized teacher models. To mitigate catastrophic
forgetting of non-target data, we utilize elastic weight consolidation to
elastically constrain the weight changes. Extensive experiments on three benchmark
datasets demonstrate the efficiency and effectiveness of our proposed method.
The results of backdoor attacks demonstrate that our proposed method achieves
satisfactory unlearning completeness.
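The mechanism sketched in the abstract, distilling the unlearning model toward a randomly initialized teacher on the target (forget) data while an elastic weight consolidation (EWC) penalty protects weights important to non-target data, can be illustrated with a short sketch. The following is a minimal, hedged illustration rather than the authors' implementation; the PyTorch setup, the loader and model names, the KL-based distillation loss, and all hyperparameters are assumptions.

```python
# Minimal sketch of the active-forgetting idea: the unlearning model (student)
# imitates a randomly initialized teacher on the data to be forgotten, while an
# EWC-style penalty keeps weights that matter for the remaining data close to
# their original values. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def estimate_fisher(model, retain_loader, device="cpu"):
    """Diagonal Fisher information estimated on the non-target (retained) data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in retain_loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(retain_loader), 1) for n, f in fisher.items()}

def active_forgetting_unlearn(model, forget_loader, retain_loader,
                              make_random_teacher, steps=5, lr=1e-3,
                              ewc_lambda=100.0, device="cpu"):
    old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    fisher = estimate_fisher(model, retain_loader, device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(steps):
        teacher = make_random_teacher().to(device).eval()  # fresh random "new memory"
        for x, _ in forget_loader:
            x = x.to(device)
            with torch.no_grad():
                target = F.softmax(teacher(x), dim=-1)
            # Overwrite old memories: match the random teacher's outputs on forget data.
            distill = F.kl_div(F.log_softmax(model(x), dim=-1), target,
                               reduction="batchmean")
            # EWC penalty: elastically constrain changes to weights important for retained data.
            ewc = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                      for n, p in model.named_parameters())
            loss = distill + ewc_lambda * ewc
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

In this sketch, `make_random_teacher` is a hypothetical callable standing in for whatever procedure produces the fresh "new memories"; the EWC term keeps parameters with high Fisher information on retained data close to their pre-unlearning values.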
Related papers
- RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning, organized along several dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z)
- Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning objective.
An Iterative Unlearning Refinement module dynamically assesses the unlearning extent on specific data pieces and makes iterative updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z)
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- Adversarial Machine Unlearning [26.809123658470693]
This paper focuses on the challenge of machine unlearning, aiming to remove the influence of specific training data on machine learning models.
Traditionally, the development of unlearning algorithms runs parallel with that of membership inference attacks (MIA), a type of privacy threat.
We propose a game-theoretic framework that integrates MIAs into the design of unlearning algorithms.
arXiv Detail & Related papers (2024-06-11T20:07:22Z)
- Efficient Knowledge Deletion from Trained Models through Layer-wise Partial Machine Unlearning [2.3496568239538083]
This paper introduces a novel class of machine unlearning algorithms.
The first method, partial amnesiac unlearning, integrates layer-wise pruning with amnesiac unlearning.
The second method incorporates layer-wise partial updates into label-flipping and optimization-based unlearning.
arXiv Detail & Related papers (2024-03-12T12:49:47Z)
- Unlearnable Algorithms for In-context Learning [36.895152458323764]
In this paper, we focus on efficient unlearning methods for the task adaptation phase of a pretrained large language model.
We observe that an LLM's ability to do in-context learning for task adaptation allows for efficient exact unlearning of task adaptation training data.
We propose a new holistic measure of unlearning cost which accounts for varying inference costs.
arXiv Detail & Related papers (2024-02-01T16:43:04Z)
- Scalable Federated Unlearning via Isolated and Coded Sharding [76.12847512410767]
Federated unlearning has emerged as a promising paradigm to erase the client-level data effect.
This paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing.
arXiv Detail & Related papers (2024-01-29T08:41:45Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU); a hedged sketch of the gradient-projection idea appears after this list.
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Model Sparsity Can Simplify Machine Unlearning [33.18951938708467]
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process.
Our study introduces a novel model-based perspective: model sparsification via weight pruning.
We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner.
arXiv Detail & Related papers (2023-04-11T02:12:02Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and neural machine translation empirically demonstrate that our algorithm makes significant improvement over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z)
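As referenced in the Projected-Gradient Unlearning (PGU) entry above, the gradient-projection idea can be sketched as follows: estimate the dominant gradient directions of the retained data, then project each unlearning update onto the orthogonal complement of that subspace so that forgetting interferes as little as possible with retained knowledge. This is a hedged sketch under assumed choices (a gradient-ascent forget objective, an SVD-based basis, illustrative names and hyperparameters), not the paper's implementation.

```python
# Hedged sketch of gradient-projection unlearning: unlearning updates are
# projected away from the dominant gradient directions of the retained data.
# Names, the rank cutoff, and the gradient-ascent forget loss are illustrative.
import torch
import torch.nn.functional as F

def flat_grad(model):
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()
                      if p.grad is not None])

def retain_gradient_basis(model, retain_loader, rank=20, device="cpu"):
    """Top-`rank` orthonormal directions spanned by retained-data gradients."""
    grads = []
    for x, y in retain_loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        grads.append(flat_grad(model))
    G = torch.stack(grads, dim=1)               # (num_params, num_batches)
    U, _, _ = torch.linalg.svd(G, full_matrices=False)
    return U[:, :rank]                          # (num_params, rank)

def projected_gradient_unlearn(model, forget_loader, retain_loader,
                               steps=1, lr=1e-3, rank=20, device="cpu"):
    basis = retain_gradient_basis(model, retain_loader, rank, device)
    for _ in range(steps):
        for x, y in forget_loader:
            model.zero_grad()
            # Ascend the loss on forget data (one common choice of forget objective).
            (-F.cross_entropy(model(x.to(device)), y.to(device))).backward()
            g = flat_grad(model)
            g = g - basis @ (basis.t() @ g)     # remove components that affect retained data
            # Write the projected gradient back and take a plain SGD step.
            offset = 0
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is None:
                        continue
                    n = p.numel()
                    p -= lr * g[offset:offset + n].view_as(p)
                    offset += n
    return model
```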