Federated Unlearning: A Survey on Methods, Design Guidelines, and
Evaluation Metrics
- URL: http://arxiv.org/abs/2401.05146v2
- Date: Fri, 16 Feb 2024 15:34:55 GMT
- Title: Federated Unlearning: A Survey on Methods, Design Guidelines, and
Evaluation Metrics
- Authors: Nicolò Romandini, Alessio Mora, Carlo Mazzocca, Rebecca Montanari,
Paolo Bellavista
- Abstract summary: Federated Unlearning (FU) algorithms efficiently remove specific clients' contributions without full model retraining.
This survey provides background concepts, empirical evidence and practical guidelines to design/implement FU schemes.
- Score: 2.9093766645364663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative training of a Machine Learning
(ML) model across multiple parties, facilitating the preservation of users' and
institutions' privacy by keeping data stored locally. Instead of centralizing
raw data, FL exchanges locally refined model parameters to build a global model
incrementally. While FL is more compliant with emerging regulations such as the
European General Data Protection Regulation (GDPR), ensuring the right to be
forgotten in this context - allowing FL participants to remove their data
contributions from the learned model - remains unclear. In addition, it is
recognized that malicious clients may inject backdoors into the global model
through updates, e.g., to generate mispredictions on specially crafted data
examples. Consequently, there is a need for mechanisms that can guarantee
individuals the ability to remove their data and erase malicious
contributions even after aggregation, without compromising the already acquired
"good" knowledge. This highlights the necessity for novel Federated Unlearning
(FU) algorithms, which can efficiently remove specific clients' contributions
without full model retraining. This survey provides background concepts,
empirical evidence, and practical guidelines to design/implement efficient FU
schemes. Our study includes a detailed analysis of the metrics for evaluating
unlearning in FL and presents an in-depth literature review categorizing
state-of-the-art FU contributions under a novel taxonomy. Finally, we outline
the most relevant open technical challenges and identify the most promising
research directions in the field.
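To make the setting concrete, the following is a minimal sketch of parameter-averaging FL (in the style of Federated Averaging) together with the naive unlearning baseline that FU schemes aim to avoid: retraining the global model from scratch without the forgotten client. The toy linear model, client setup, and all function names are illustrative assumptions, not taken from the survey.

```python
# Sketch: FedAvg-style training on a toy least-squares problem, then naive
# unlearning by full retraining without the client to be forgotten.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local refinement: gradient steps on its private data."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg(client_data, rounds=20, dim=3):
    """Server loop: broadcast global model, collect local models,
    average them weighted by each client's dataset size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, d) for d in client_data]
        sizes = np.array([len(d[1]) for d in client_data], dtype=float)
        w_global = np.average(updates, axis=0, weights=sizes)
    return w_global

# Three clients holding private data drawn from the same linear model.
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w_all = fedavg(clients)            # global model trained on every client
w_unlearned = fedavg(clients[:2])  # naive unlearning: retrain without client 2
```

The retraining baseline is exact but costs a full training run per erasure request; the FU algorithms surveyed here instead try to approximate `w_unlearned` directly from the already-trained model and (possibly) stored client updates.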
Related papers
- Privacy-Preserving Federated Unlearning with Certified Client Removal [18.36632825624581]
State-of-the-art methods for unlearning use historical data from FL clients, such as gradients or locally trained models.
We propose Starfish, a privacy-preserving federated unlearning scheme using Two-Party Computation (2PC) techniques and shared historical client data between two non-colluding servers.
arXiv Detail & Related papers (2024-04-15T12:27:07Z)
- SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [66.19763977571114]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes an unneglectable challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- A Survey on Federated Unlearning: Challenges, Methods, and Future Directions [21.90319100485268]
In recent years, the notion of the "right to be forgotten" (RTBF) has become a crucial aspect of data privacy for digital trust and AI safety.
Machine unlearning (MU), which allows an ML model to selectively eliminate identifiable information, has gained considerable attention.
FU has emerged to confront the challenge of data erasure within federated learning settings.
arXiv Detail & Related papers (2023-10-31T13:32:00Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework that is easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- Knowledge Distillation for Federated Learning: a Practical Guide [8.2791533759453]
Federated Learning (FL) enables the training of Deep Learning models without centrally collecting possibly sensitive raw data.
The most used algorithms for FL are parameter-averaging schemes (e.g., Federated Averaging), which, however, have well-known limits.
We provide a review of KD-based algorithms tailored for specific FL issues.
arXiv Detail & Related papers (2022-11-09T08:31:23Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.