Privacy and Robustness in Federated Learning: Attacks and Defenses
- URL: http://arxiv.org/abs/2012.06337v1
- Date: Mon, 7 Dec 2020 12:11:45 GMT
- Title: Privacy and Robustness in Federated Learning: Attacks and Defenses
- Authors: Lingjuan Lyu, Han Yu, Xingjun Ma, Lichao Sun, Jun Zhao, Qiang Yang,
Philip S. Yu
- Abstract summary: We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and corresponding defenses; 3) inference attacks against privacy and corresponding defenses, we provide an accessible review of this important topic.
- Score: 74.62641494122988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As data are increasingly stored in different silos and societies
become more aware of data privacy issues, the traditional centralized
training of artificial intelligence (AI) models is facing efficiency and
privacy challenges. Recently, federated learning (FL) has emerged as an
alternative solution and continues to thrive in this new reality. Existing FL
protocol design has been shown to be vulnerable to adversaries within or
outside of the system, compromising data privacy and system robustness. Besides
training powerful global models, it is of paramount importance to design FL
systems that have privacy guarantees and are resistant to different types of
adversaries. In this paper, we conduct the first comprehensive survey on this
topic. Through a concise introduction to the concept of FL, and a unique
taxonomy covering: 1) threat models; 2) poisoning attacks against robustness
and corresponding defenses; 3) inference attacks against privacy and
corresponding defenses, we provide an
accessible review of this important topic. We highlight the intuitions, key
techniques as well as fundamental assumptions adopted by various attacks and
defenses. Finally, we discuss promising future research directions towards
robust and privacy-preserving federated learning.
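The FL setting the abstract introduces centers on a server aggregating client updates rather than collecting raw data. A minimal sketch of the server-side step, assuming FedAvg-style weighted averaging (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Server-side FedAvg step: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_params)
    weights = np.array(client_sizes, dtype=float) / total
    return (weights[:, None] * stacked).sum(axis=0)

# Two clients holding 100 and 300 samples respectively.
new_global = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                    [100, 300])
# → array([0.25, 0.75])
```

Because the server only ever sees parameter updates, both attack families surveyed here operate on exactly this interface: poisoning attacks corrupt the inputs to the average, and inference attacks mine the updates for private information.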
Related papers
- SpooFL: Spoofing Federated Learning [54.05993847488204]
We introduce a spoofing-based defense that deceives attackers into believing they have recovered the true training data. Unlike prior synthetic-data defenses that share classes or distributions with the private data, SFL uses a state-of-the-art generative model trained on an external dataset. As a result, attackers are misled into recovering plausible yet completely irrelevant samples, preventing meaningful data leakage.
arXiv Detail & Related papers (2026-01-21T14:57:18Z)
- On the Security and Privacy of Federated Learning: A Survey with Attacks, Defenses, Frameworks, Applications, and Future Directions [1.7056096558557128]
Federated Learning (FL) is an emerging distributed machine learning paradigm enabling clients to train a global model collaboratively without sharing their raw data. While FL enhances data privacy by design, it remains vulnerable to various security and privacy threats. Security-enhancing methods aim to improve FL robustness against malicious behaviors such as Byzantine attacks, poisoning, and Sybil attacks. Privacy-preserving techniques focus on protecting sensitive data through cryptographic approaches, differential privacy, and secure aggregation.
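One common security-enhancing idea mentioned above, robust aggregation against Byzantine or poisoning attacks, can be sketched with a coordinate-wise median; this is a minimal illustration of the general technique, not a specific defense from any of these papers:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of client updates: a single Byzantine
    client cannot drag the aggregate arbitrarily far, unlike the mean."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = np.array([1e6, -1e6])   # poisoned update with extreme values
agg = median_aggregate(honest + [byzantine])
# agg stays near the honest updates; a plain mean would be dominated
# by the poisoned values.
```

The design choice here is the classic robustness trade-off: the median tolerates a minority of arbitrarily corrupted clients, at the cost of discarding some information from honest ones.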
arXiv Detail & Related papers (2025-08-19T11:06:20Z)
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
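As a small illustration of why shared gradients leak private information (in the spirit of Deep Leakage attacks, though not FEDLAD's own method): for a softmax classifier with cross-entropy loss and non-negative penultimate features, the last-layer gradient row for the true label is the only one with a negative sum, so a single sample's label can be read off directly. The network below is hypothetical, constructed only to demonstrate this property:

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.random(16) + 0.1        # positive penultimate features (e.g. post-ReLU)
W = rng.normal(size=(5, 16))    # last-layer weights, 5 classes
true_label = 3

# Forward pass and analytic cross-entropy gradient w.r.t. W:
# dL/dW[i, :] = (p[i] - 1{i == true_label}) * h
z = W @ h
p = np.exp(z - z.max()); p /= p.sum()
grad_W = (p - np.eye(5)[true_label])[:, None] * h[None, :]

# Each row sums to (p[i] - y[i]) * sum(h); only the true-label row
# is negative, so the label is recovered from the gradient alone.
recovered = int(np.argmin(grad_W.sum(axis=1)))
# → 3
```

Full Deep Leakage attacks go further, optimizing dummy inputs until their gradients match the observed ones, but this single-layer case already shows that gradients are not anonymized data.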
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Trustworthy Federated Learning: Privacy, Security, and Beyond [37.495790989584584]
Federated Learning (FL) addresses privacy concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z)
- Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey [27.859861825159342]
Deep learning has shown incredible potential across a vast array of tasks.
Recent privacy concerns have further highlighted the challenges of accessing such data.
Federated learning has emerged as an important privacy-preserving technology.
arXiv Detail & Related papers (2024-05-06T16:55:20Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
Although privacy and fairness are two crucial ethical notions, the interactions between them are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- Federated Learning Attacks and Defenses: A Survey [6.980116513609015]
This paper systematically surveys the possible attacks on current FL systems and their corresponding defenses.
In view of three existing classification criteria, namely the three stages of machine learning, the three roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines for privacy protection, we divide attack approaches into two categories: those targeting the training stage and those targeting the prediction stage of machine learning.
arXiv Detail & Related papers (2022-11-27T22:07:07Z)
- 10 Security and Privacy Problems in Large Foundation Models [69.70602220716718]
A pre-trained foundation model is like an "operating system" of the AI ecosystem.
A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem.
In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models.
arXiv Detail & Related papers (2021-10-28T21:45:53Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Threats to Federated Learning: A Survey [35.724483191921244]
Federated learning (FL) has emerged as a promising solution under this new reality.
Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries.
This paper provides a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL.
arXiv Detail & Related papers (2020-03-04T15:30:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.