Threats to Federated Learning: A Survey
- URL: http://arxiv.org/abs/2003.02133v1
- Date: Wed, 4 Mar 2020 15:30:10 GMT
- Title: Threats to Federated Learning: A Survey
- Authors: Lingjuan Lyu, Han Yu, Qiang Yang
- Abstract summary: Federated learning (FL) has emerged as a promising solution under this new reality.
Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries.
This paper provides a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL.
- Score: 35.724483191921244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the emergence of data silos and popular privacy awareness, the
traditional centralized approach of training artificial intelligence (AI)
models is facing strong challenges. Federated learning (FL) has recently
emerged as a promising solution under this new reality. Existing FL protocol
design has been shown to exhibit vulnerabilities which can be exploited by
adversaries both within and outside the system to compromise data privacy. It
is thus of paramount importance that FL system designers be aware of the
implications of future FL algorithm designs for privacy preservation. Currently,
there is no survey on this topic. In this paper, we bridge this important gap
in FL literature. By providing a concise introduction to the concept of FL, and
a unique taxonomy covering threat models and two major attacks on FL: 1)
poisoning attacks and 2) inference attacks, this paper provides an accessible
review of this important topic. We highlight the intuitions, key techniques as
well as fundamental assumptions adopted by various attacks, and discuss
promising future research directions towards more robust privacy preservation
in FL.
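The poisoning attacks covered by the survey's taxonomy can be illustrated with a minimal numpy sketch (not taken from the paper): a model-replacement attacker boosts its malicious update so that it dominates plain federated averaging. The update values and scaling rule here are illustrative assumptions, not results from the survey.

```python
import numpy as np

def fedavg(updates):
    """Plain FedAvg: unweighted average of client model updates."""
    return np.mean(np.stack(updates), axis=0)

# Honest clients submit small updates toward the true optimum.
honest = [np.array([0.10, 0.10]), np.array([0.12, 0.08])]

# A model-replacement attacker scales its malicious update by the
# number of clients so that it survives the averaging step.
n_clients = 3
malicious_goal = np.array([-1.0, 2.0])
boosted = n_clients * malicious_goal

aggregate = fedavg(honest + [boosted])
# The aggregate is pulled almost entirely toward the attacker's goal.
```

With only three clients and no robust aggregation, the single boosted update steers the global model; this is the intuition behind defenses such as update clipping and robust aggregation rules discussed in the poisoning literature.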
Related papers
- Trustworthy Federated Learning: Privacy, Security, and Beyond [37.495790989584584]
Federated Learning (FL) addresses privacy concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z)
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- Federated Learning: Attacks, Defenses, Opportunities, and Challenges [0.0]
Many consider federated learning (FL) the start of a new era in AI, yet it is still immature.
FL has not garnered the community's trust since its security and privacy implications are controversial.
This research aims to deliver a complete overview of FL's security and privacy features.
arXiv Detail & Related papers (2024-03-10T03:05:59Z)
- Federated Learning with New Knowledge: Fundamentals, Advances, and Futures [69.8830772538421]
This paper systematically defines the main sources of new knowledge in Federated Learning (FL)
We examine the impact of the form and timing of new knowledge arrival on the incorporation process.
We discuss the potential future directions for FL with new knowledge, considering a variety of factors such as scenario setups, efficiency, and security.
arXiv Detail & Related papers (2024-02-03T21:29:31Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- Federated Learning Attacks and Defenses: A Survey [6.980116513609015]
This paper sorts out the possible attacks and corresponding defenses of the current FL system systematically.
Considering three classification criteria, namely the stages of machine learning, the roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) principles for privacy protection, attack approaches are divided into two categories: those targeting the training stage and those targeting the prediction stage.
arXiv Detail & Related papers (2022-11-27T22:07:07Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
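Several of the papers above concern inference attacks that reconstruct training data from shared gradients. The core leakage mechanism can be shown with a minimal numpy sketch (an illustrative assumption, not code from any of these papers): for a linear layer with a bias, the weight gradient of a single example is rank one, so the server can recover the client's input exactly from the gradients it receives.

```python
import numpy as np

rng = np.random.default_rng(0)

# A client's private input and a toy linear layer y = W x + b.
x = rng.normal(size=4)          # private training example
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

# One local step: the client computes MSE-loss gradients that
# would normally be sent to the FL server for aggregation.
y = W @ x + b
delta = 2 * (y - target)        # dL/dy for L = ||y - target||^2
grad_W = np.outer(delta, x)     # dL/dW = delta x^T  (rank one)
grad_b = delta                  # dL/db = delta

# Server-side reconstruction: row i of grad_W equals delta_i * x,
# so dividing by the matching bias gradient recovers x exactly.
i = np.argmax(np.abs(grad_b))   # pick a row with non-negligible delta_i
x_recovered = grad_W[i] / grad_b[i]

assert np.allclose(x_recovered, x)
```

Deep networks and larger batches make exact recovery harder, which is where the optimization-based gradient inversion attacks (and the practicality questions raised above) come in.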
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.