Can Decentralized Learning be more robust than Federated Learning?
- URL: http://arxiv.org/abs/2303.03829v1
- Date: Tue, 7 Mar 2023 11:53:37 GMT
- Title: Can Decentralized Learning be more robust than Federated Learning?
- Authors: Mathilde Raynal and Dario Pasquini and Carmela Troncoso
- Abstract summary: We introduce two new attacks against Decentralized Learning (DL).
We demonstrate our attacks' efficiency against Self-Centered Clipping, the state-of-the-art robust DL protocol.
We show that the capabilities decentralization grants to Byzantine users result in decentralized learning always providing less robustness than federated learning.
- Score: 8.873449722727026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized Learning (DL) is a peer-to-peer learning approach that allows
a group of users to jointly train a machine learning model. To ensure
correctness, DL should be robust, i.e., Byzantine users must not be able to
tamper with the result of the collaboration. In this paper, we introduce two
new attacks against DL where a Byzantine user can: make the network
converge to an arbitrary model of their choice, and exclude an arbitrary user
from the learning process. We demonstrate our attacks' efficiency against
Self-Centered Clipping, the state-of-the-art robust DL protocol. Finally,
we show that the capabilities decentralization grants to Byzantine users result
in decentralized learning always providing less robustness than
federated learning.
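For context, Self-Centered Clipping (the robust DL protocol the attacks target) has each node clip every received neighbor model around its own model before mixing, which bounds how far any single neighbor can move the local parameters in one round. The snippet below is a minimal illustrative sketch of that aggregation step; the mixing weights, clipping radius `tau`, and function names are assumptions made for the example, not the authors' reference implementation.

```python
import numpy as np

def clip(v, tau):
    """Scale v so its norm is at most tau (identity if already within the radius)."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def self_centered_clipping_step(x_i, neighbor_models, weights, tau):
    """One aggregation round at node i: each neighbor's model is clipped around
    the node's own model before being mixed in, limiting the per-round
    influence of any single Byzantine neighbor."""
    aggregated = x_i.copy()
    for x_j, w_ij in zip(neighbor_models, weights):
        aggregated += w_ij * clip(x_j - x_i, tau)
    return aggregated

# Example: an honest node mixing three neighbors' models with uniform weights;
# the third neighbor sends an outlier model that gets clipped.
x_i = np.zeros(4)
neighbors = [np.ones(4), 0.5 * np.ones(4), 100 * np.ones(4)]
print(self_centered_clipping_step(x_i, neighbors, weights=[1/3, 1/3, 1/3], tau=2.0))
```

Centering the clipping on the node's own model is the mechanism that is supposed to cap Byzantine influence; it is this protocol whose robustness the paper's two attacks put to the test.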
Related papers
- Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness [5.735144760031169]
Byzantine clients intentionally disrupt the learning process by broadcasting arbitrary model updates to other clients.
In this paper, we introduce SecureDL, a novel DL protocol designed to enhance the security and privacy of DL against Byzantine threats.
Our experiments show that SecureDL is effective even in the case of attacks by the malicious majority.
arXiv Detail & Related papers (2024-04-27T18:17:36Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - On the (In)security of Peer-to-Peer Decentralized Machine Learning [16.671864590599288]
We introduce a suite of novel attacks for both passive and active decentralized adversaries.
We demonstrate that, contrary to what is claimed by proponents of decentralized learning, decentralized learning does not offer any security advantage over federated learning.
arXiv Detail & Related papers (2022-05-17T15:36:50Z) - RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
arXiv Detail & Related papers (2021-07-18T09:34:57Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of untrusted peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
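As a rough illustration of the UBAR-style defense summarized in the last entry, the sketch below combines a distance-based pre-selection of neighbor models with a performance check against a local loss. The selection ratio, the uniform mixing, and the fallback rule are assumptions made for this example rather than the algorithm as published.

```python
import numpy as np

def ubar_style_aggregate(x_i, neighbor_models, local_loss_fn, ratio=0.5):
    """Illustrative two-stage Byzantine-resilient aggregation at node i."""
    # Stage 1: keep the `ratio` fraction of neighbors whose models lie closest
    # (Euclidean distance) to the node's own model.
    dists = [np.linalg.norm(x_j - x_i) for x_j in neighbor_models]
    k = max(1, int(ratio * len(neighbor_models)))
    candidates = [neighbor_models[j] for j in np.argsort(dists)[:k]]

    # Stage 2: among the candidates, keep models whose loss on a local batch is
    # no worse than the node's own; fall back to the single best candidate.
    own_loss = local_loss_fn(x_i)
    accepted = [x_j for x_j in candidates if local_loss_fn(x_j) <= own_loss]
    if not accepted:
        accepted = [min(candidates, key=local_loss_fn)]

    # Mix the surviving models with the local one (uniform weights here).
    return np.mean(np.stack([x_i] + accepted), axis=0)

# Example with a toy quadratic "loss" standing in for evaluation on a local batch;
# the third neighbor sends an obviously poisoned model and is filtered out.
local_loss = lambda w: float(np.sum((w - 1.0) ** 2))
x_i = np.zeros(3)
neighbors = [np.full(3, 0.9), np.full(3, 1.1), np.full(3, 50.0)]
print(ubar_style_aggregate(x_i, neighbors, local_loss))
```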