Decentralized Federated Learning: A Survey on Security and Privacy
- URL: http://arxiv.org/abs/2401.17319v1
- Date: Thu, 25 Jan 2024 23:35:47 GMT
- Title: Decentralized Federated Learning: A Survey on Security and Privacy
- Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif, Boyu Wang, and Qiang Yang
- Abstract summary: Federated learning has been rapidly evolving and gaining popularity in recent years due to its privacy-preserving features.
The exchange of model updates and gradients in this architecture provides new attack surfaces for malicious users.
Trustability and verifiability of decentralized federated learning are also considered in this study.
- Score: 15.790159174067174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning has been rapidly evolving and gaining popularity in recent
years due to its privacy-preserving features, among other advantages.
Nevertheless, the exchange of model updates and gradients in this architecture
provides new attack surfaces for malicious users of the network, which may
jeopardize model performance as well as user and data privacy. For this reason,
one of the main motivations for decentralized federated learning is to
eliminate server-related threats by removing the server from the network and
compensating for it through technologies such as blockchain. However, this
advantage comes at the cost of exposing the system to new privacy threats.
Thus, performing a thorough security analysis in this new paradigm is
necessary. This survey studies possible variations of threats and adversaries
in decentralized federated learning and overviews the potential defense
mechanisms. Trustability and verifiability of decentralized federated learning
are also considered in this study.
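To make the serverless setting concrete, the sketch below shows one round of decentralized federated learning in which each client trains locally and then averages its model with those of its neighbors. This is a minimal illustration of the update exchange described above, not the survey's own algorithm; the toy least-squares objective, function names, and three-client topology are all assumptions made for the example.

```python
# Minimal sketch of one decentralized FL round (gossip averaging), assuming
# a toy least-squares objective and a small fully connected topology. The
# peer-to-peer exchange of model parameters seen here is exactly the new
# attack surface discussed in the abstract.
import numpy as np

def local_update(weights, data, lr=0.01):
    """One local gradient step on a client's private least-squares data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def gossip_round(weights_by_client, neighbors, datasets):
    """Each client trains locally, then averages with its neighbors' models."""
    trained = {c: local_update(w, datasets[c]) for c, w in weights_by_client.items()}
    return {
        c: np.mean([trained[c]] + [trained[n] for n in neighbors[c]], axis=0)
        for c in trained
    }

# Usage: three clients, ten rounds, no central server involved.
rng = np.random.default_rng(0)
datasets = {c: (rng.normal(size=(20, 5)), rng.normal(size=20)) for c in range(3)}
weights = {c: np.zeros(5) for c in range(3)}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(10):
    weights = gossip_round(weights, neighbors, datasets)
```

The survey's scope covers exactly what this sketch leaves out: authenticating peers, recording exchanges (for example, on a blockchain), and defending the averaged updates against poisoning and inference attacks.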
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered from shared gradients through a technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Federated Learning with Blockchain-Enhanced Machine Unlearning: A Trustworthy Approach [20.74679353443655]
We introduce a framework that melds blockchain with federated learning, thereby ensuring an immutable record of unlearning requests and actions.
Our key contributions encompass a certification mechanism for the unlearning process, the enhancement of data security and privacy, and the optimization of data management.
arXiv Detail & Related papers (2024-05-27T04:35:49Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare [0.0]
We present vulnerabilities, attacks, and defenses based on the widened attack surfaces of Federated Learning.
We suggest promising new research directions toward a more robust FL.
arXiv Detail & Related papers (2024-01-16T16:07:53Z)
- Security and Privacy Issues of Federated Learning [0.0]
Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns.
This paper presents a comprehensive taxonomy of security and privacy challenges in Federated Learning (FL) across various machine learning models.
arXiv Detail & Related papers (2023-07-22T22:51:07Z)
- Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms [4.5441516134546385]
The main goal of this study is to uncover potential vulnerabilities and defense mechanisms that might compromise the privacy and performance of systems that use federated and transfer learning.
arXiv Detail & Related papers (2022-07-05T22:07:26Z)
- On the (In)security of Peer-to-Peer Decentralized Machine Learning [16.671864590599288]
We introduce a suite of novel attacks for both passive and active decentralized adversaries.
We demonstrate that, contrary to the claims of its proponents, decentralized learning does not offer any security advantage over federated learning.
arXiv Detail & Related papers (2022-05-17T15:36:50Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks against robustness and their defenses, and 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance (a generic robust-aggregation sketch follows this list).
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
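The last entry above motivates robust aggregation when some neighbors may be Byzantine. The following is a generic, hypothetical sketch based on a coordinate-wise median; it is not the UBAR algorithm from that paper, only an illustration of how a client can tolerate a minority of arbitrarily corrupted neighbor updates.

```python
# Illustrative only: a generic coordinate-wise median aggregation rule,
# not the UBAR algorithm referenced above. With a minority of Byzantine
# neighbors, the median ignores extreme per-coordinate outliers.
import numpy as np

def robust_aggregate(own_update, neighbor_updates):
    """Coordinate-wise median over the client's own and neighbor updates."""
    stacked = np.vstack([own_update] + list(neighbor_updates))
    return np.median(stacked, axis=0)

# Example: two honest neighbors and one Byzantine neighbor sending garbage.
honest = [np.array([1.0, 1.0, 1.0]), np.array([1.1, 0.9, 1.0])]
byzantine = [np.array([100.0, -100.0, 50.0])]
print(robust_aggregate(np.array([0.9, 1.0, 1.1]), honest + byzantine))
# -> roughly [1.05, 0.95, 1.05]; the corrupted coordinates have no effect
```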