On the (In)security of Peer-to-Peer Decentralized Machine Learning
- URL: http://arxiv.org/abs/2205.08443v3
- Date: Fri, 10 Nov 2023 07:47:56 GMT
- Title: On the (In)security of Peer-to-Peer Decentralized Machine Learning
- Authors: Dario Pasquini, Mathilde Raynal and Carmela Troncoso
- Abstract summary: We introduce a suite of novel attacks for both passive and active decentralized adversaries.
We demonstrate that, contrary to what is claimed by decentralized learning proponents, decentralized learning does not offer any security advantage over federated learning.
- Score: 16.671864590599288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we carry out the first, in-depth, privacy analysis of
Decentralized Learning -- a collaborative machine learning framework aimed at
addressing the main limitations of federated learning. We introduce a suite of
novel attacks for both passive and active decentralized adversaries. We
demonstrate that, contrary to what is claimed by decentralized learning
proponents, decentralized learning does not offer any security advantage over
federated learning. Rather, it increases the attack surface, enabling any user
in the system to perform privacy attacks such as gradient inversion, and even
to gain full control over honest users' local models. We also show that, given the
state of the art in protections, privacy-preserving configurations of
decentralized learning require fully connected networks, losing any practical
advantage over the federated setup and therefore completely defeating the
objective of the decentralized approach.
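Why the attack surface grows is easiest to see from the mechanics of gossip averaging: in each round every peer sends its full local model to its neighbors, so any curious neighbor observes those parameters directly, and differencing a neighbor's models across rounds exposes its local update, exactly the input a gradient inversion attack needs. The following is a minimal illustrative sketch (numpy, assumed names, not the paper's experimental code) of ring gossip with a passive adversary that simply records what it receives.

```python
import numpy as np

rng = np.random.default_rng(0)
n_peers, dim = 5, 8
params = [rng.normal(size=dim) for _ in range(n_peers)]   # each peer's local model
received_by_adversary = []                                 # peer 0 plays a passive adversary

def local_update(theta):
    # placeholder for a local SGD step on private data
    return theta - 0.1 * rng.normal(size=theta.shape)

for round_ in range(3):
    params = [local_update(p) for p in params]
    new_params = []
    for i in range(n_peers):
        left, right = params[(i - 1) % n_peers], params[(i + 1) % n_peers]
        if i == 0:
            # the adversary sees its neighbors' raw parameters every round
            received_by_adversary.append((round_, left.copy(), right.copy()))
        new_params.append((params[i] + left + right) / 3.0)  # plain gossip averaging on a ring
    params = new_params

print(f"adversary observed {len(received_by_adversary)} neighbor snapshots")
```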
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient-based technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
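Deep Leakage (gradient inversion) reconstructs private training data by optimizing dummy inputs and labels until their gradients match the gradients a victim shared. The sketch below is a hedged, minimal PyTorch illustration of that idea on a tiny linear classifier; it is not the FEDLAD benchmark code, and real attacks add label inference and data priors on top of this basic loop.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)                      # tiny victim model
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])

# gradients the victim would share for one example
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# attacker optimizes dummy data and label logits to match those gradients
x_dummy = torch.randn(1, 16, requires_grad=True)
y_logits = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_logits], lr=0.1)

for step in range(300):
    opt.zero_grad()
    # soft-label cross entropy so the dummy label is also optimizable
    dummy_loss = torch.sum(-F.softmax(y_logits, dim=-1)
                           * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", torch.norm(x_dummy.detach() - x_true).item())
```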
- Fantastyc: Blockchain-based Federated Learning Made Secure and Practical [0.7083294473439816]
Federated Learning is a decentralized framework that enables clients to collaboratively train a machine learning model under the orchestration of a central server without sharing their local data.
The centrality of this framework represents a single point of failure, which is addressed in the literature by blockchain-based federated learning approaches.
We propose Fantastyc, a solution designed to address these challenges, which have never been met together in the state of the art.
arXiv Detail & Related papers (2024-06-05T20:01:49Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Initialisation and Network Effects in Decentralised Federated Learning [1.5961625979922607]
Decentralised federated learning enables collaborative training of individual machine learning models on a distributed network of communicating devices.
This approach avoids central coordination, enhances data privacy and eliminates the risk of a single point of failure.
We propose a strategy for uncoordinated initialisation of the artificial neural networks based on the distribution of eigenvector centralities of the underlying communication network.
arXiv Detail & Related papers (2024-03-23T14:24:36Z)
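Eigenvector centrality scores each node of the communication graph by the leading eigenvector of its adjacency matrix, so better-connected peers receive larger values. The sketch below computes it with shifted power iteration in numpy; how those centralities are mapped onto per-peer initialisation in the paper is not specified in this summary, so the final scaling step is purely an illustrative assumption.

```python
import numpy as np

# illustrative communication graph over 6 peers: a ring plus one chord
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1.0

# eigenvector centrality via power iteration (identity shift guarantees convergence)
M = A + np.eye(6)
c = np.ones(6)
for _ in range(200):
    c = M @ c
    c /= np.linalg.norm(c)
print("eigenvector centralities:", np.round(c, 3))

# assumed, purely illustrative use: scale each peer's initial weight std by its centrality
rng = np.random.default_rng(0)
init = {peer: rng.normal(scale=float(c[peer]), size=(8, 8)) for peer in range(6)}
```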
- Decentralized Federated Learning: A Survey on Security and Privacy [15.790159174067174]
Federated learning has been rapidly evolving and gaining popularity in recent years due to its privacy-preserving features.
The exchange of model updates and gradients in this architecture provides new attack surfaces for malicious users.
Trustability and verifiability of decentralized federated learning are also considered in this study.
arXiv Detail & Related papers (2024-01-25T23:35:47Z)
- Exploring the Robustness of Decentralized Training for Large Language Models [51.41850749014054]
Decentralized training of large language models has emerged as an effective way to democratize this technology.
This paper explores the robustness of decentralized training from three main perspectives.
arXiv Detail & Related papers (2023-12-01T04:04:03Z)
- When Decentralized Optimization Meets Federated Learning [41.58479981773202]
Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single point of failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
arXiv Detail & Related papers (2023-06-05T03:51:14Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
arXiv Detail & Related papers (2021-02-09T13:58:33Z)
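Consensus distance here measures how far the peers' local models have drifted from their network-wide average, e.g. the mean squared L2 distance between each local parameter vector and the mean model. The helper below is a small assumed-name sketch of that quantity, not the paper's code.

```python
import numpy as np

def consensus_distance(local_params):
    """Mean squared L2 distance of each peer's parameters from the average model."""
    stacked = np.stack(local_params)          # shape: (n_peers, dim)
    mean_model = stacked.mean(axis=0)
    return float(np.mean(np.sum((stacked - mean_model) ** 2, axis=1)))

rng = np.random.default_rng(0)
peers = [rng.normal(size=100) for _ in range(8)]
print("consensus distance:", consensus_distance(peers))
```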
- Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
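In decentralized SGD every peer aggregates its neighbors' models itself, so Byzantine resilience has to be enforced locally at each peer. The abstract does not spell out UBAR's rule, so the sketch below shows a generic distance-based filter (an illustrative assumption, not UBAR itself): a peer keeps only the neighbor models closest to its own before averaging, which bounds the influence of a few malicious neighbors.

```python
import numpy as np

def robust_aggregate(own, neighbor_models, n_byzantine):
    """Keep the neighbor models closest to our own, then average with it.

    Generic distance-based filtering, not the UBAR algorithm itself.
    """
    dists = [np.linalg.norm(m - own) for m in neighbor_models]
    keep = np.argsort(dists)[: len(neighbor_models) - n_byzantine]
    kept = [neighbor_models[i] for i in keep]
    return np.mean(np.stack([own] + kept), axis=0)

rng = np.random.default_rng(0)
own = rng.normal(size=10)
honest = [own + 0.01 * rng.normal(size=10) for _ in range(4)]
malicious = [own + 100.0 * rng.normal(size=10)]            # a single Byzantine neighbor
print(robust_aggregate(own, honest + malicious, n_byzantine=1))
```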
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.