Federated Unlearning
- URL: http://arxiv.org/abs/2012.13891v2
- Date: Sun, 21 Feb 2021 10:08:04 GMT
- Title: Federated Unlearning
- Authors: Gaoyang Liu, Yang Yang, Xiaoqiang Ma, Chen Wang, Jiangchuan Liu
- Abstract summary: Federated learning (FL) has emerged as a promising distributed machine learning (ML) paradigm.
Practical needs of the "right to be forgotten" and countering data poisoning attacks call for efficient techniques that can remove, or unlearn, specific training data from the trained FL model.
We present FedEraser, the first federated unlearning methodology that can eliminate the influence of a federated client's data on the global FL model.
- Score: 24.60965999954735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has recently emerged as a promising distributed
machine learning (ML) paradigm. Practical needs of the "right to be forgotten"
and countering data poisoning attacks call for efficient techniques that can
remove, or unlearn, specific training data from the trained FL model. Existing
unlearning techniques in the context of ML, however, are no longer effective
for FL, mainly due to the inherent differences in how FL and ML learn from
data. Therefore, how to enable efficient data removal from FL models
remains largely under-explored. In this paper, we take the first step to fill
this gap by presenting FedEraser, the first federated unlearning methodology
that can eliminate the influence of a federated client's data on the global FL
model while significantly reducing the time used for constructing the unlearned
FL model. The basic idea of FedEraser is to trade the central server's storage
for the unlearned model's construction time: FedEraser reconstructs the
unlearned model by leveraging the historical parameter updates of federated
clients that have been retained at the central server during the training
process of FL. A novel calibration method is further developed to calibrate the
retained updates, which are then used to promptly construct the unlearned
model, yielding a significant speed-up in the reconstruction of the unlearned
model while maintaining model efficacy. Experiments on four realistic
datasets demonstrate the effectiveness of FedEraser, with an expected speed-up
of $4\times$ compared with retraining from scratch. We envision our work as
an early step in FL towards compliance with legal and ethical criteria in a
fair and transparent manner.
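To make the mechanism described in the abstract concrete, below is a minimal sketch of replay-based unlearning from retained client updates. It is an illustration only: the names (`unlearn_via_retained_updates`, `calibrate_fn`) and the specific calibration rule (keep the stored update's magnitude, take its direction from a short calibration pass on a remaining client) are assumptions drawn from the abstract, not the paper's exact algorithm.

```python
import numpy as np

def unlearn_via_retained_updates(initial_model, retained_updates, target_client,
                                 calibrate_fn, calibration_epochs=1):
    """Rebuild the global model without `target_client` by replaying the
    per-round client updates retained at the server, instead of retraining
    from scratch.

    retained_updates: list over FL rounds; each round is a dict mapping
        client_id -> stored parameter update (np.ndarray).
    calibrate_fn: callable(model, client_id, epochs) -> np.ndarray, running a
        few local steps on a remaining client to re-orient its stored update.
    """
    model = np.asarray(initial_model, dtype=float).copy()
    for round_updates in retained_updates:
        calibrated = []
        for client_id, old_update in round_updates.items():
            if client_id == target_client:
                continue  # drop the unlearned client's contribution entirely
            # Illustrative calibration: keep the stored update's magnitude,
            # take the direction from a brief calibration pass on this client.
            new_dir = calibrate_fn(model, client_id, calibration_epochs)
            new_dir = new_dir / (np.linalg.norm(new_dir) + 1e-12)
            calibrated.append(np.linalg.norm(old_update) * new_dir)
        if not calibrated:
            continue  # no remaining clients contributed this round
        # FedAvg-style aggregation of the calibrated updates
        model = model + np.mean(calibrated, axis=0)
    return model
```

Because each replayed round needs only a few calibration steps on the remaining clients rather than full local training, reconstruction can be substantially faster than retraining from scratch, which is the trade of server storage for construction time described above.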
Related papers
- FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher [4.291269657919828]
Federated Learning (FL) promises better privacy guarantees for individuals' data when machine learning models are collaboratively trained.
When an FL participant exercises its right to be forgotten, i.e., to detach from the FL framework in which it has participated, the FL solution should perform all the necessary steps.
We propose FedQUIT, a novel algorithm that uses knowledge distillation to scrub the contribution of the forgetting data from an FL global model.
arXiv Detail & Related papers (2024-08-14T14:36:28Z)
- Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience [26.647028483763137]
We introduce Fast-FedUL, a tailored unlearning method for Federated Learning (FL).
We develop an algorithm to systematically remove the impact of the target client from the trained model.
Experimental results indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients.
arXiv Detail & Related papers (2024-05-28T10:51:38Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL, but also close its accuracy gap to its centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Critical Learning Periods in Federated Learning [11.138980572551066]
Federated learning (FL) is a popular technique to train machine learning (ML) models with decentralized data.
We show that the final test accuracy of FL is dramatically affected by the early phase of the training process.
arXiv Detail & Related papers (2021-09-12T21:06:07Z)
- Prototype Guided Federated Learning of Visual Feature Representations [15.021124010665194]
Federated Learning (FL) is a framework which enables distributed model training using a large corpus of decentralized training data.
Existing methods aggregate models disregarding their internal representations, which are crucial for training models in vision tasks.
We introduce FedProto, which computes client deviations using margins of representations learned on distributed data.
arXiv Detail & Related papers (2021-05-19T08:29:12Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)