Federated Learning Attacks and Defenses: A Survey
- URL: http://arxiv.org/abs/2211.14952v1
- Date: Sun, 27 Nov 2022 22:07:07 GMT
- Title: Federated Learning Attacks and Defenses: A Survey
- Authors: Yao Chen, Yijie Gui, Hong Lin, Wensheng Gan, Yongdong Wu
- Abstract summary: This paper systematically surveys the possible attacks on current FL systems and their corresponding defenses.
Drawing on three classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) principles of privacy protection, the attack approaches are divided into two categories according to whether they target the training stage or the prediction stage of machine learning.
- Score: 6.980116513609015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional machine learning, in which a single server trains models
on centrally collected data, suffers from several security and privacy
deficiencies. To address this limitation, federated learning (FL) has been
proposed and is known for breaking down "data silos" and protecting the privacy
of users. However, FL has not yet gained wide adoption in industry, mainly
because of its security and privacy risks and its high communication cost. To
advance research in this field, build robust FL systems, and enable the wide
application of FL, this paper systematically surveys the possible attacks on
current FL systems and their corresponding defenses. It first briefly
introduces the basic workflow of FL and the background on attacks and defenses,
and then reviews the large body of recent research on privacy theft and
malicious attacks. Most importantly, drawing on three classification criteria,
namely the three stages of machine learning, the three different roles in
federated learning, and the CIA (Confidentiality, Integrity, and Availability)
principles of privacy protection, we divide attack approaches into two
categories according to whether they target the training stage or the
prediction stage of machine learning. For each attack method, we further
identify the CIA property it violates and the role an attacker can take.
Defense mechanisms are then analyzed separately at the privacy level and the
security level. Finally, we summarize the challenges that attacks and defenses
pose for the application of FL and discuss future directions for FL systems.
An FL system designed along these lines can resist different attacks and is
therefore more secure and stable.
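To make the basic FL workflow mentioned in the abstract concrete, the following is a minimal sketch of federated averaging (FedAvg) over a few simulated clients. The linear model, the helper names (`local_sgd`, `fedavg_round`), and the synthetic data are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Client-side update: a few epochs of gradient descent on local data only."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Server-side step: broadcast the global model, then average the returned
    client models weighted by each client's number of samples."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Simulated "data silos": each client keeps its own private samples locally.
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):            # 20 communication rounds
    w = fedavg_round(w, clients)
print("recovered weights:", w)  # approaches [2, -1] without sharing raw data
```

Note that only model parameters leave the clients; the attacks surveyed in the paper target exactly these shared updates, either during training (e.g., poisoning) or at prediction time (e.g., inference attacks).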
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered from shared gradients through a technique known as Deep Leakage (a toy sketch of this kind of leakage follows the related-papers list below).
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Federated Learning: Attacks, Defenses, Opportunities, and Challenges [0.0]
Many consider federated learning (FL) the start of a new era in AI, yet it is still immature.
FL has not garnered the community's trust since its security and privacy implications are controversial.
This research aims to deliver a complete overview of FL's security and privacy features.
arXiv Detail & Related papers (2024-03-10T03:05:59Z)
- SaFL: Sybil-aware Federated Learning with Application to Face Recognition [13.914187113334222]
Federated Learning (FL) is a machine learning paradigm in which clients collaboratively train a joint model.
On the downside, FL raises security and privacy concerns that have just started to be studied.
This paper proposes a new defense method against poisoning attacks in FL called SaFL.
arXiv Detail & Related papers (2023-11-07T21:06:06Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models [58.631918656336005]
We propose a novel attack that reveals private user text by deploying malicious parameter vectors.
Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embedding.
arXiv Detail & Related papers (2022-01-29T22:38:21Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and robustness defenses; 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Threats to Federated Learning: A Survey [35.724483191921244]
Federated learning (FL) has emerged as a promising solution under this new reality.
Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries.
This paper provides a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL.
arXiv Detail & Related papers (2020-03-04T15:30:10Z)
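As a toy illustration of the gradient leakage mentioned in the FEDLAD entry above, the sketch below uses the well-known analytic reconstruction for a fully-connected layer with a bias: for a single sample, dL/dW = e x^T and dL/db = e share the same error vector e, so the private input x can be read off by dividing any row of the weight gradient by the matching bias-gradient entry. This is a simplified, assumption-laden example (linear layer, squared loss, exact single-sample gradient), not the FEDLAD benchmark or the original Deep Leakage optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 5, 3

W = rng.normal(size=(d_out, d_in))   # shared layer weights (known to the attacker)
b = rng.normal(size=d_out)

x_private = rng.normal(size=d_in)    # the client's private input, never transmitted
y_target = rng.normal(size=d_out)

# Client computes its local gradient of the squared loss ||Wx + b - y||^2.
e = 2.0 * (W @ x_private + b - y_target)   # error term e = dL/dz
grad_W = np.outer(e, x_private)            # dL/dW = e x^T   (this is what gets uploaded)
grad_b = e                                 # dL/db = e       (this is what gets uploaded)

# Attacker-side reconstruction from the uploaded gradients alone.
row = np.argmax(np.abs(grad_b))            # any row with a non-zero bias gradient
x_reconstructed = grad_W[row] / grad_b[row]

print("private input:      ", np.round(x_private, 4))
print("reconstructed input:", np.round(x_reconstructed, 4))
```

Defenses surveyed in this line of work (secure aggregation, differential privacy, gradient compression) aim precisely at breaking this kind of per-client gradient visibility.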
This list is automatically generated from the titles and abstracts of the papers in this site.