Threats and Defenses in Federated Learning Life Cycle: A Comprehensive Survey and Challenges
- URL: http://arxiv.org/abs/2407.06754v2
- Date: Thu, 11 Jul 2024 11:50:03 GMT
- Title: Threats and Defenses in Federated Learning Life Cycle: A Comprehensive Survey and Challenges
- Authors: Yanli Li, Zhongliang Guo, Nan Yang, Huaming Chen, Dong Yuan, Weiping Ding
- Abstract summary: Federated Learning (FL) offers innovative solutions for privacy-preserving collaborative machine learning (ML).
Despite its promising potential, FL is vulnerable to various attacks due to its distributed nature.
This paper reviews the most representative and state-of-the-art threats and defense frameworks throughout the FL service life cycle.
- Score: 22.77777852049633
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated Learning (FL) offers innovative solutions for privacy-preserving collaborative machine learning (ML). Despite its promising potential, FL is vulnerable to various attacks due to its distributed nature, affecting the entire life cycle of FL services. These threats can harm the model's utility or compromise participants' privacy, either directly or indirectly. In response, numerous defense frameworks have been proposed, demonstrating effectiveness in specific settings and scenarios. To provide a clear understanding of the current research landscape, this paper reviews the most representative and state-of-the-art threats and defense frameworks throughout the FL service life cycle. We start by identifying FL threats that harm utility and privacy, including those with potential or direct impacts. Then, we dive into the defense frameworks, analyze the relationship between threats and defenses, and compare the trade-offs among different defense strategies. Finally, we summarize current research bottlenecks and offer insights into future research directions to conclude this survey. We hope this survey sheds light on trustworthy FL research and contributes to the FL community.
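The collaborative training without raw-data sharing described in the abstract is commonly realized with Federated Averaging (FedAvg): clients train locally and the server averages their parameters, weighted by local dataset size. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total  # each client's share of the data
    return coeffs @ stacked                  # weighted average of parameters

# Three clients with different amounts of local data.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = fedavg(updates, sizes)
print(global_model)  # [3.5 4.5]
```

Because only parameter vectors (not raw data) reach the server, this aggregation step is exactly where both poisoning attacks and privacy-inference attacks on FL concentrate.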
Related papers
- Trustworthy Federated Learning: Privacy, Security, and Beyond [37.495790989584584]
Federated Learning (FL) addresses privacy concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - Federated Learning: Attacks, Defenses, Opportunities, and Challenges [0.0]
Many consider federated learning (FL) the start of a new era in AI, yet it is still immature.
FL has not garnered the community's trust since its security and privacy implications are controversial.
This research aims to deliver a complete overview of FL's security and privacy features.
arXiv Detail & Related papers (2024-03-10T03:05:59Z) - A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z) - Federated Learning Attacks and Defenses: A Survey [6.980116513609015]
This paper sorts out the possible attacks and corresponding defenses of the current FL system systematically.
Based on three classification criteria, namely the three stages of machine learning, the three roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) principles for privacy protection, we divide attack approaches into two categories: those targeting the training stage and those targeting the prediction stage.
arXiv Detail & Related papers (2022-11-27T22:07:07Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that it achieves improved performance with robustness guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Challenges and approaches for mitigating byzantine attacks in federated learning [6.836162272841266]
Federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remains local.
Despite this promising prospect, Byzantine attacks, an intractable threat in conventional distributed networks, have proven effective against FL as well.
We propose a new Byzantine attack method called the weight attack to defeat those defense schemes, and conduct experiments to demonstrate its threat.
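The weight attack itself is specific to that paper, but the underlying fragility it exploits is easy to demonstrate: a single Byzantine client can arbitrarily skew plain averaging, while a standard robust aggregator such as the coordinate-wise median stays near the honest updates. A toy sketch with illustrative values (not the paper's experiment):

```python
import numpy as np

# Honest clients submit similar updates; one Byzantine client submits an
# arbitrarily scaled update to drag the aggregate away from the optimum.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
byzantine = np.array([100.0, -100.0])
updates = np.stack(honest + [byzantine])

mean_agg = updates.mean(axis=0)          # plain FedAvg: dominated by the attacker
median_agg = np.median(updates, axis=0)  # coordinate-wise median: robust

print(mean_agg)    # [ 25.75 -24.25]
print(median_agg)  # [1.05 0.95], close to the honest cluster
```

Attacks like the weight attack are designed to evade precisely such robust aggregators, e.g. by keeping malicious updates statistically close to honest ones.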
arXiv Detail & Related papers (2021-12-29T09:24:05Z) - Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective [47.23145404191034]
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
We show our key observation that the data representation leakage from gradients is the essential cause of privacy leakage in FL.
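The representation-leakage observation can be made concrete for the simplest case: a linear layer trained on a single sample, where the weight gradient is a rank-one outer product of the error and the input, so the server can recover the private input exactly from the shared gradients. A minimal sketch (toy model, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single linear layer y = W x + b trained on one private sample (x, t)
# with squared-error loss L = 0.5 * ||W x + b - t||^2.
x = rng.normal(size=4)            # the client's private input
t = rng.normal(size=3)            # its private target
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

err = W @ x + b - t               # residual
grad_W = np.outer(err, x)         # dL/dW = err x^T  (shared with the server)
grad_b = err                      # dL/db = err      (shared with the server)

# The server reconstructs x from the shared gradients alone: each row of
# grad_W is x scaled by one entry of grad_b, so divide the largest row out.
i = np.argmax(np.abs(grad_b))
recovered_x = grad_W[i] / grad_b[i]
print(np.allclose(recovered_x, x))  # True
```

Real attacks on deeper networks and batched updates (e.g. gradient-inversion optimization) are approximate rather than exact, but they exploit the same leakage of data representations through gradients.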
arXiv Detail & Related papers (2020-12-08T20:42:12Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z) - Threats to Federated Learning: A Survey [35.724483191921244]
Federated learning (FL) has emerged as a promising solution under this new reality.
Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries.
This paper provides a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL.
arXiv Detail & Related papers (2020-03-04T15:30:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.