A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions
- URL: http://arxiv.org/abs/2310.19218v3
- Date: Tue, 6 Feb 2024 05:33:51 GMT
- Title: A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions
- Authors: Yang Zhao, Jiaxi Yang, Yiling Tao, Lixu Wang, Xiaoxiao Li, Dusit Niyato
- Abstract summary: The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
- Score: 71.16718184611673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of privacy-preserving Federated Learning (FL) has led to an
increasing demand for implementing the right to be forgotten. The
implementation of selective forgetting is particularly challenging in FL due to
its decentralized nature. This complexity has given rise to a new field,
Federated Unlearning (FU). FU emerges as a strategic solution to address the
increasing need for data privacy, including the implementation of the 'right to
be forgotten'. The primary challenge in developing FU approaches lies in
balancing the trade-offs in privacy, security, utility, and efficiency, as
these elements often have competing requirements. Achieving an optimal
equilibrium among these facets is crucial for maintaining the effectiveness and
usability of FL systems while adhering to privacy and security standards. This
survey provides a comprehensive analysis of existing FU methods, incorporating
a detailed review of the various evaluation metrics. Furthermore, we unify
these diverse methods and metrics into an experimental framework. Additionally,
the survey discusses potential future research directions in FU. Finally, a
continually updated repository of related open-source materials is available
at: https://github.com/abbottyanginchina/Awesome-Federated-Unlearning.
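To make the unlearning problem concrete, the toy sketch below shows one common family of FU designs: the server retains the per-round client updates it aggregated and rebuilds the global model without the contributions of the client to be forgotten. This is an illustrative assumption-laden sketch, not a method from the survey; practical approaches additionally recalibrate the retained updates, since later local updates depend on earlier global models.

```python
# Toy sketch of history-based federated unlearning: re-aggregate stored
# client updates while excluding the client to be forgotten.
# All names, shapes, and the replay shortcut are illustrative assumptions.
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client model updates (FedAvg-style aggregation)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def train_with_history(global_model, client_updates_per_round, client_sizes):
    """Apply FedAvg round by round, recording the per-round client updates."""
    history = []
    for round_updates in client_updates_per_round:
        global_model = global_model + fedavg(round_updates, client_sizes)
        history.append(round_updates)
    return global_model, history

def unlearn_client(init_model, history, client_sizes, forget_id):
    """Rebuild the global model from stored updates, excluding one client."""
    kept = [i for i in range(len(client_sizes)) if i != forget_id]
    model = init_model.copy()
    for round_updates in history:
        model = model + fedavg([round_updates[i] for i in kept],
                               [client_sizes[i] for i in kept])
    return model

# Toy usage: 3 clients, 2 rounds, a 4-dimensional "model".
rng = np.random.default_rng(0)
init = np.zeros(4)
rounds = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]
sizes = [100, 50, 200]
model, hist = train_with_history(init, rounds, sizes)
unlearned = unlearn_client(init, hist, sizes, forget_id=1)
```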
Related papers
- Ten Challenging Problems in Federated Foundation Models [55.343738234307544]
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses the general competencies of foundation models with the privacy-preserving capabilities of federated learning.
This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
arXiv Detail & Related papers (2025-02-14T04:01:15Z)
- Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z)
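For readers unfamiliar with fuzzy cognitive maps, the sketch below shows the basic FCM data structure and reasoning step, plus a naive size-weighted averaging of client edge-weight matrices. It is a hypothetical illustration only; the paper's concurrent vertical and horizontal scheme is not reproduced here, and all names are assumptions.

```python
# Basic fuzzy cognitive map (FCM) reasoning step plus a naive horizontal
# federated aggregation of edge weights. Illustrative assumptions only.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(weights, state):
    """One FCM inference step: each concept aggregates weighted influences."""
    return sigmoid(weights.T @ state + state)  # common variant with self-memory

def federate_fcm(local_weight_matrices, client_sizes):
    """Toy horizontal aggregation: size-weighted average of local edge weights."""
    sizes = np.asarray(client_sizes, dtype=float)
    sizes = sizes / sizes.sum()
    return sum(s * w for s, w in zip(sizes, local_weight_matrices))

# Toy usage: two clients each hold a 3-concept FCM.
rng = np.random.default_rng(1)
w_clients = [rng.uniform(-1, 1, size=(3, 3)) for _ in range(2)]
w_global = federate_fcm(w_clients, client_sizes=[120, 80])
state = np.array([0.2, 0.9, 0.5])
for _ in range(5):  # iterate the map a few steps
    state = fcm_step(w_global, state)
```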
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving, decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient inversion technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
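The Deep Leakage attack referenced above reconstructs private inputs by optimizing a dummy example until its gradient matches the gradient a client shared. The sketch below is a minimal, hypothetical illustration of that gradient-matching idea (tiny linear model, label assumed known), not FEDLAD's benchmark protocol.

```python
# Minimal Deep-Leakage-style gradient inversion: optimize a dummy input so
# its gradient matches the gradient the victim shared. Model size and the
# known-label simplification are illustrative assumptions.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 3)          # tiny stand-in for the shared model
loss_fn = torch.nn.CrossEntropyLoss()

# Victim's private example and the gradient it would share.
x_true = torch.randn(1, 8)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker: start from noise and match gradients.
x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    return loss

for _ in range(10):
    opt.step(closure)
print("reconstruction error:", (x_dummy - x_true).norm().item())
```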
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey [2.769238399659845]
Federated learning (FL) offers a compelling framework for training large language models (LLMs).
We focus on machine unlearning, a crucial aspect for complying with privacy regulations like the Right to be Forgotten.
We explore various strategies that enable effective unlearning, such as perturbation techniques, model decomposition, and incremental learning.
arXiv Detail & Related papers (2024-06-14T08:40:58Z)
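As a concrete instance of the perturbation-style unlearning strategies mentioned above, the toy sketch below performs a few gradient-ascent steps on the data to be forgotten and then briefly fine-tunes on retained data to repair utility. A small linear model stands in for an LLM; all names and hyperparameters are assumptions.

```python
# Perturbation-style unlearning sketch: ascend the loss on the forget set,
# then repair on retained data. Illustrative assumptions throughout.
import torch

def unlearn_by_ascent(model, forget_batch, retain_batch, loss_fn,
                      ascent_steps=5, repair_steps=5, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    xf, yf = forget_batch
    xr, yr = retain_batch
    for _ in range(ascent_steps):        # push the model away from forgotten data
        opt.zero_grad()
        (-loss_fn(model(xf), yf)).backward()
        opt.step()
    for _ in range(repair_steps):        # recover utility on retained data
        opt.zero_grad()
        loss_fn(model(xr), yr).backward()
        opt.step()
    return model

# Toy usage.
torch.manual_seed(0)
model = torch.nn.Linear(16, 4)
loss_fn = torch.nn.CrossEntropyLoss()
forget = (torch.randn(8, 16), torch.randint(0, 4, (8,)))
retain = (torch.randn(32, 16), torch.randint(0, 4, (32,)))
unlearn_by_ascent(model, forget, retain, loss_fn)
```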
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
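To illustrate the regularization-based family mentioned above, the sketch below implements a minimal elastic weight consolidation (EWC) penalty: parameters important for the previous task, as estimated by a diagonal Fisher term, are discouraged from moving. The toy model, data, and constants are assumptions, not the framework proposed in the paper.

```python
# Minimal EWC sketch: penalize movement of parameters that mattered for the
# previous task while training on the new task. Illustrative assumptions only.
import torch

def fisher_diagonal(model, x, y, loss_fn):
    """Crude diagonal Fisher estimate from squared gradients on old-task data."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, anchor_params, fisher, lam=100.0):
    return lam / 2 * sum((fisher[n] * (p - anchor_params[n]) ** 2).sum()
                         for n, p in model.named_parameters())

# Toy usage: remember task A while training on task B.
torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
xa, ya = torch.randn(64, 10), torch.randint(0, 2, (64,))
xb, yb = torch.randn(64, 10) + 2.0, torch.randint(0, 2, (64,))

fisher = fisher_diagonal(model, xa, ya, loss_fn)
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    (loss_fn(model(xb), yb) + ewc_penalty(model, anchor, fisher)).backward()
    opt.step()
```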
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework designed to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- When Decentralized Optimization Meets Federated Learning [41.58479981773202]
Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single point of failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
arXiv Detail & Related papers (2023-06-05T03:51:14Z)
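The decentralized topology mentioned above can be illustrated with a simple gossip-averaging sketch: instead of a central server, each node mixes its model only with its ring neighbours using a doubly stochastic matrix. Topology, weights, and the number of rounds are illustrative assumptions.

```python
# Gossip averaging over a ring: repeated neighbour mixing drives consensus
# without a central server. Illustrative assumptions only.
import numpy as np

def ring_mixing_matrix(n, self_weight=0.5):
    """Doubly stochastic mixing matrix for a ring of n nodes (n >= 3)."""
    w = np.zeros((n, n))
    for i in range(n):
        w[i, i] = self_weight
        w[i, (i - 1) % n] = (1 - self_weight) / 2
        w[i, (i + 1) % n] = (1 - self_weight) / 2
    return w

def gossip_round(local_models, mixing):
    """Each node replaces its model by a weighted average of its neighbours'."""
    stacked = np.stack(local_models)            # shape: (n_nodes, dim)
    return list(mixing @ stacked)

# Toy usage: 5 nodes, 3-dimensional models, repeated gossip drives consensus.
rng = np.random.default_rng(2)
models = [rng.normal(size=3) for _ in range(5)]
w = ring_mixing_matrix(5)
for _ in range(50):
    models = gossip_round(models, w)
print("spread after gossip:", np.ptp(np.stack(models), axis=0))
```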
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
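For context, the sketch below shows the vanilla combination of adversarial training and federated learning (clients train on FGSM-perturbed examples, the server averages the resulting models), which is the setting whose robustness issues SFAT targets. SFAT's slack mechanism itself is not reproduced; all names and hyperparameters are assumptions.

```python
# Vanilla federated adversarial training sketch: FGSM-perturbed local
# training followed by server-side averaging. Illustrative assumptions only.
import copy
import torch

def fgsm(model, x, y, loss_fn, eps=0.1):
    """Fast gradient sign method perturbation of a batch."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def local_adversarial_training(global_model, data, loss_fn, epochs=1, lr=0.05):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = data
    for _ in range(epochs):
        x_adv = fgsm(model, x, y, loss_fn)       # craft adversarial examples
        opt.zero_grad()
        loss_fn(model(x_adv), y).backward()      # train on them
        opt.step()
    return model.state_dict()

def fedavg_states(states):
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}

# Toy usage: 3 clients with random data, one communication round.
torch.manual_seed(0)
global_model = torch.nn.Linear(12, 3)
loss_fn = torch.nn.CrossEntropyLoss()
clients = [(torch.randn(16, 12), torch.randint(0, 3, (16,))) for _ in range(3)]
states = [local_adversarial_training(global_model, d, loss_fn) for d in clients]
global_model.load_state_dict(fedavg_states(states))
```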
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
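The generalized federated optimization pattern discussed in the field guide can be summarized as: clients compute local updates, and the server treats the weighted average update as a pseudo-gradient for its own optimizer. The sketch below illustrates this pattern with mocked client deltas and a momentum server step; names and constants are illustrative assumptions.

```python
# FedOpt-style server step: average client deltas, treat the result as a
# pseudo-gradient, and apply momentum SGD on the server. Client behaviour
# is mocked; all names and constants are illustrative assumptions.
import numpy as np

def server_round(global_model, client_deltas, client_weights,
                 momentum_buf, server_lr=1.0, beta=0.5):
    """One FedOpt-style server step on the weighted-average client delta."""
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()
    pseudo_grad = -sum(wi * d for wi, d in zip(w, client_deltas))
    momentum_buf = beta * momentum_buf + pseudo_grad
    return global_model - server_lr * momentum_buf, momentum_buf

# Toy usage: mocked client deltas pointing roughly toward a common optimum.
rng = np.random.default_rng(3)
target = np.array([1.0, -2.0, 0.5])
model = np.zeros(3)
buf = np.zeros(3)
for _ in range(40):
    deltas = [0.3 * (target - model) + 0.05 * rng.normal(size=3) for _ in range(4)]
    model, buf = server_round(model, deltas, [10, 20, 30, 40], buf)
print("distance to optimum:", np.linalg.norm(model - target))
```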
This list is automatically generated from the titles and abstracts of the papers on this site.