Unlearning during Learning: An Efficient Federated Machine Unlearning Method
- URL: http://arxiv.org/abs/2405.15474v1
- Date: Fri, 24 May 2024 11:53:13 GMT
- Title: Unlearning during Learning: An Efficient Federated Machine Unlearning Method
- Authors: Hanlin Gu, Gongxi Zhu, Jie Zhang, Xinyuan Zhao, Yuxing Han, Lixin Fan, Qiang Yang
- Abstract summary: Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm.
To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged.
We introduce FedAU, an innovative and efficient FMU framework aimed at overcoming the limitations of current FMU approaches.
- Score: 20.82138206063572
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm. To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged. However, current FMU approaches often involve additional time-consuming steps and may not offer comprehensive unlearning capabilities, which renders them less practical in real FL scenarios. In this paper, we introduce FedAU, an innovative and efficient FMU framework aimed at overcoming these limitations. Specifically, FedAU incorporates a lightweight auxiliary unlearning module into the learning process and employs a straightforward linear operation to facilitate unlearning. This approach eliminates the requirement for extra time-consuming steps, rendering it well-suited for FL. Furthermore, FedAU exhibits remarkable versatility. It not only enables multiple clients to carry out unlearning tasks concurrently but also supports unlearning at various levels of granularity, including individual data samples, specific classes, and even at the client level. We conducted extensive experiments on MNIST, CIFAR10, and CIFAR100 datasets to evaluate the performance of FedAU. The results demonstrate that FedAU effectively achieves the desired unlearning effect while maintaining model accuracy.
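The abstract commits to two concrete ingredients: a lightweight auxiliary module trained alongside the normal FL rounds, and a plain linear operation that performs the unlearning, so no separate post-hoc retraining phase is needed. Below is a minimal, hypothetical PyTorch sketch of that shape; the auxiliary module's architecture, how it is fit on the forget-set, and the coefficient `alpha` are illustrative assumptions, not FedAU's actual construction.

```python
# Hypothetical sketch: an auxiliary unlearning module trained alongside the
# main model, whose contribution is later removed by a simple linear
# combination of parameters. Module shape, loss, and `alpha` are assumptions.
import torch
import torch.nn as nn

class MainModel(nn.Module):
    def __init__(self, dim=32, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)
    def forward(self, x):
        return self.fc(x)

class AuxUnlearningModule(nn.Module):
    """Lightweight head fit only on the data slated for forgetting."""
    def __init__(self, dim=32, n_classes=10):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)
    def forward(self, x):
        return self.fc(x)

@torch.no_grad()
def linear_unlearn(main, aux, alpha=0.5):
    """Unlearning as a linear parameter-space operation: subtract a scaled
    copy of the auxiliary module's weights from the main model."""
    for p_main, p_aux in zip(main.parameters(), aux.parameters()):
        p_main.sub_(alpha * p_aux)
    return main

main, aux = MainModel(), AuxUnlearningModule()
# ... during FL rounds, `aux` would be fit on the forget-set while `main`
# trains normally; no extra time-consuming unlearning phase is needed ...
unlearned = linear_unlearn(main, aux, alpha=0.5)
print(sum(p.numel() for p in unlearned.parameters()), "parameters adjusted")
```

Because the unlearning step is just a linear combination in parameter space, it costs a single pass over the weights, which is what would make such an approach cheap enough to run during learning rather than after it.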
Related papers
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications; a generic sketch of the PEFT idea follows below.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
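As one concrete instance of what PEFT means in an FL setting, the sketch below shows a generic LoRA-style adapter in which only two small low-rank matrices are trainable, so a client would upload just those factors each round. This illustrates the general technique, not any specific method from the survey.

```python
# Generic LoRA-style adapter: only the low-rank factors A and B are trainable,
# so an FL client would communicate just these small matrices each round.
# Illustrative only; not a method proposed by the survey.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # frozen base path plus low-rank update path
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(64, 64)
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)  # ['A', 'B'] -- the only tensors a client needs to send
```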
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients; a rough sketch of the local update follows below.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
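The summary describes client-local AMSGrad-style updates with a per-client learning rate. The sketch below shows that general shape on a toy quadratic; the fixed `client_lr` is a stand-in, since FedLALR's actual auto-tuning rule is not reproduced here.

```python
# Sketch of a client-local AMSGrad-style step with a client-specific base
# learning rate. `client_lr` is a placeholder for the paper's auto-tuned rate.
import torch

def amsgrad_step(param, grad, state, client_lr, b1=0.9, b2=0.99, eps=1e-8):
    """One AMSGrad update; `state` holds m, v, v_hat kept per client."""
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    state["v_hat"] = torch.maximum(state["v_hat"], state["v"])  # AMSGrad max
    param -= client_lr * state["m"] / (state["v_hat"].sqrt() + eps)
    return param

w = torch.zeros(3)
state = {"m": torch.zeros(3), "v": torch.zeros(3), "v_hat": torch.zeros(3)}
for t in range(5):                                # local steps on one client
    g = 2 * (w - torch.tensor([1.0, 2.0, 3.0]))   # grad of a toy quadratic
    w = amsgrad_step(w, g, state, client_lr=0.1)
print(w)
```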
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more the clients benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity among clients; a generic curriculum sketch follows below.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
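A common way to realize a curriculum is to order each client's samples from easy to hard by current loss and train on a growing prefix. The sketch below shows that generic recipe; the paper's actual pacing function and client-selection rule are not reproduced here.

```python
# Generic loss-based curriculum: order a client's samples from easy to hard
# using the model's per-sample loss. Illustrative, not the paper's method.
import torch
import torch.nn.functional as F

def curriculum_order(model, xs, ys):
    """Return sample indices sorted by current loss (easiest first)."""
    with torch.no_grad():
        losses = F.cross_entropy(model(xs), ys, reduction="none")
    return torch.argsort(losses)

model = torch.nn.Linear(8, 4)
xs, ys = torch.randn(16, 8), torch.randint(0, 4, (16,))
order = curriculum_order(model, xs, ys)
# Train on a growing easy-to-hard prefix of the ordered data:
for frac in (0.25, 0.5, 1.0):
    subset = order[: int(frac * len(order))]
    batch_x, batch_y = xs[subset], ys[subset]
    print(f"stage {frac}: {len(subset)} samples")  # local SGD would go here
```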
- SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization [23.064896326146386]
Machine Unlearning (MU) aims at removing the contribution of a given data point from a training procedure.
While several Federated Unlearning (FU) methods have been proposed, we introduce SIFU (Sequential Informed Federated Unlearning) as a new method.
arXiv Detail & Related papers (2022-11-21T17:15:46Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning [17.55163940659976]
Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model.
We design an efficient FL framework, FedMarl, which jointly optimizes model accuracy, processing latency, and communication efficiency.
Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost; a toy reward sketch follows below.
arXiv Detail & Related papers (2022-01-09T05:55:17Z)
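One way to make "jointly optimizing accuracy, latency, and communication" concrete is a scalar reward that the per-client RL agents maximize. The weights and the reward form below are assumptions for illustration, not FedMarl's actual reward design.

```python
# Hypothetical per-round reward for RL-based client selection, trading off
# accuracy gain against latency and communication cost. Weights are made up.
def selection_reward(acc_gain, latency_s, comm_mb,
                     w_acc=1.0, w_lat=0.1, w_comm=0.01):
    return w_acc * acc_gain - w_lat * latency_s - w_comm * comm_mb

# Each client agent would decide to participate based on expected reward:
candidates = {"client_a": (0.8, 2.0, 30.0), "client_b": (0.3, 9.0, 120.0)}
chosen = [c for c, (g, l, m) in candidates.items()
          if selection_reward(g, l, m) > 0]
print(chosen)  # ['client_a'] under these toy numbers
```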
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-IID users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics; a rough sketch of the idea follows below.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
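The stated mechanism is sharing batch-normalization statistics. A bare-bones version of that idea, copying BN running statistics from an adversarially trained client into a plain one, is sketched below; how the statistics are "carefully designed" in the paper is not captured here.

```python
# Sketch: propagate robustness by copying batch-norm running statistics from
# a client that ran adversarial training into one that did not. Illustrative.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(),
                         nn.Linear(16, 2))

robust_client, plain_client = make_net(), make_net()

@torch.no_grad()
def share_bn_stats(src, dst):
    """Copy BN running mean/variance between structurally identical nets."""
    for s, d in zip(src.modules(), dst.modules()):
        if isinstance(s, nn.BatchNorm1d):
            d.running_mean.copy_(s.running_mean)
            d.running_var.copy_(s.running_var)

share_bn_stats(robust_client, plain_client)
print(torch.equal(plain_client[1].running_mean,
                  robust_client[1].running_mean))  # True
```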
- FedU: A Unified Framework for Federated Multi-Task Learning with Laplacian Regularization [15.238123204624003]
Federated multi-task learning (FMTL) has emerged as a natural choice to capture the statistical diversity among the clients in federated learning.
To unleash the potential of FMTL beyond statistical diversity, we formulate a new FMTL problem, FedU, using Laplacian regularization; the penalty term is sketched below.
arXiv Detail & Related papers (2021-02-14T13:19:43Z)
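Laplacian regularization over client models penalizes weighted disagreement between related clients, roughly lambda * sum over pairs (k, l) of a_kl * ||w_k - w_l||^2. A minimal sketch with made-up similarity weights `a`:

```python
# Laplacian penalty over client models: lambda * sum_{k<l} a_kl * ||w_k - w_l||^2.
# The similarity weights and scalar "models" here are toy assumptions.
import torch

def laplacian_penalty(client_params, a, lam=0.1):
    """client_params: one parameter tensor per client.
    a[k][l]: similarity weight between clients k and l."""
    total = torch.tensor(0.0)
    n = len(client_params)
    for k in range(n):
        for l in range(k + 1, n):
            total = total + a[k][l] * (client_params[k] - client_params[l]).pow(2).sum()
    return lam * total

ws = [torch.tensor([1.0]), torch.tensor([1.5]), torch.tensor([3.0])]
a = [[0, 1.0, 0.2], [1.0, 0, 0.2], [0.2, 0.2, 0]]
print(laplacian_penalty(ws, a))  # grows when strongly linked clients disagree
```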
- Federated Unlearning [24.60965999954735]
Federated learning (FL) has emerged as a promising distributed machine learning (ML) paradigm.
Practical needs of the "right to be forgotten" and countering data poisoning attacks call for efficient techniques that can remove, or unlearn, specific training data from the trained FL model.
We present FedEraser, the first federated unlearning methodology that can eliminate the influence of a federated client's data on the global FL model.
arXiv Detail & Related papers (2020-12-27T08:54:37Z)