A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions
- URL: http://arxiv.org/abs/2310.19218v3
- Date: Tue, 6 Feb 2024 05:33:51 GMT
- Title: A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions
- Authors: Yang Zhao, Jiaxi Yang, Yiling Tao, Lixu Wang, Xiaoxiao Li, Dusit Niyato
- Abstract summary: The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
- Score: 71.16718184611673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of privacy-preserving Federated Learning (FL) has led to an
increasing demand for implementing the right to be forgotten. The
implementation of selective forgetting is particularly challenging in FL due to
its decentralized nature. This complexity has given rise to a new field,
Federated Unlearning (FU). FU emerges as a strategic solution to address the
increasing need for data privacy, including the implementation of the "right to
be forgotten". The primary challenge in developing FU approaches lies in
balancing the trade-offs in privacy, security, utility, and efficiency, as
these elements often have competing requirements. Achieving an optimal
equilibrium among these facets is crucial for maintaining the effectiveness and
usability of FL systems while adhering to privacy and security standards. This
survey provides a comprehensive analysis of existing FU methods, incorporating
a detailed review of the various evaluation metrics. Furthermore, we unify
these diverse methods and metrics into an experimental framework. Additionally,
the survey discusses potential future research directions in FU. Finally, a
continually updated repository of related open-source materials is available
at: https://github.com/abbottyanginchina/Awesome-Federated-Unlearning.
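To make the unlearning objective concrete, the sketch below is a minimal illustration in plain Python with NumPy; it is not taken from the survey's experimental framework, and the function names and synthetic setup are assumptions made only for this example. It trains a small FedAvg federation and then "unlearns" one client by retraining from scratch without it, the exact but costly reference point that efficient FU methods try to approximate.

# Illustrative sketch only: FedAvg on synthetic data, then the naive
# "retrain from scratch without the forgotten client" unlearning baseline.
# Names (local_sgd, fedavg) are hypothetical and do not refer to any
# specific method reviewed in the survey.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # One client's local update: gradient descent on mean squared error.
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(clients, rounds=20, dim=5):
    # Server loop: broadcast the global model, collect local updates,
    # and average them (unweighted, since every client holds 50 samples).
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_sgd(w.copy(), X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    return w

# Synthetic federation: 4 clients, each with its own linear-regression data.
true_w = rng.normal(size=5)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 5))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_full = fedavg(clients)          # model trained on all clients

# Client 0 exercises the right to be forgotten. Retraining without that
# client gives the exact (but expensive) unlearned model; efficient FU
# methods aim to approximate it at a fraction of this cost.
w_unlearned = fedavg(clients[1:])

print("parameter distance, full vs. retrained:",
      np.linalg.norm(w_full - w_unlearned))

Efficient FU methods covered by the survey aim to reach a model close to w_unlearned without rerunning the whole training loop, and they are typically judged by how closely they approximate this retrained reference while preserving utility on the remaining clients.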
Related papers
- Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework [1.4206132527980742]
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
We present the recent advances in developing APPFL, a framework and benchmarking suite for federated learning.
We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization.
arXiv Detail & Related papers (2024-09-17T22:20:26Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - Advances and Open Challenges in Federated Foundation Models [34.37509703688661]
The integration of Foundation Models (FMs) with Federated Learning (FL) presents a transformative paradigm in Artificial Intelligence (AI)
This paper provides a comprehensive survey of the emerging field of Federated Foundation Models (FedFM)
arXiv Detail & Related papers (2024-04-23T09:44:58Z) - Federated Learning with New Knowledge: Fundamentals, Advances, and
Futures [69.8830772538421]
This paper systematically defines the main sources of new knowledge in Federated Learning (FL)
We examine the impact of the form and timing of new knowledge arrival on the incorporation process.
We discuss the potential future directions for FL with new knowledge, considering a variety of factors such as scenario setups, efficiency, and security.
arXiv Detail & Related papers (2024-02-03T21:29:31Z) - Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics [2.7456900944642686]
Federated unlearning (FU) algorithms efficiently remove clients' contributions without full model retraining.
This article provides background concepts, empirical evidence, and practical guidelines to design/implement efficient FU schemes.
arXiv Detail & Related papers (2024-01-10T13:26:19Z) - A Survey on Federated Unlearning: Challenges, Methods, and Future Directions [21.90319100485268]
In recent years, the notion of the right to be forgotten" (RTBF) has become a crucial aspect of data privacy for digital trust and AI safety.
Machine unlearning (MU) has gained considerable attention which allows an ML model to selectively eliminate identifiable information.
FU has emerged to confront the challenge of data erasure within federated learning settings.
arXiv Detail & Related papers (2023-10-31T13:32:00Z) - A Survey of Trustworthy Federated Learning with Perspectives on
Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z) - Towards Federated Long-Tailed Learning [76.50892783088702]
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
Recent attempts have been made, on the one hand, to address the problem of learning from pervasive private data and, on the other, to learn from long-tailed data.
This paper focuses on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework.
arXiv Detail & Related papers (2022-06-30T02:34:22Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)