Secure Byzantine-Robust Distributed Learning via Clustering
- URL: http://arxiv.org/abs/2110.02940v1
- Date: Wed, 6 Oct 2021 17:40:26 GMT
- Title: Secure Byzantine-Robust Distributed Learning via Clustering
- Authors: Raj Kiriti Velicheti, Derek Xia, Oluwasanmi Koyejo
- Abstract summary: Federated learning systems that jointly preserve Byzantine robustness and privacy have remained an open problem.
We propose SHARE, a distributed learning framework designed to cryptographically preserve client update privacy and robustness to Byzantine adversaries simultaneously.
- Score: 16.85310886805588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning systems that jointly preserve Byzantine robustness and
privacy have remained an open problem. Robust aggregation, the standard defense
for Byzantine attacks, generally requires server access to individual updates
or nonlinear computation -- and is thus incompatible with privacy-preserving
methods such as secure aggregation via multiparty computation. To this end, we
propose SHARE (Secure Hierarchical Robust Aggregation), a distributed learning
framework designed to cryptographically preserve client update privacy and
robustness to Byzantine adversaries simultaneously. The key idea is to
incorporate secure averaging among randomly clustered clients before filtering
malicious updates through robust aggregation. Experiments show that SHARE has
similar robustness guarantees as existing techniques while enhancing privacy.
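To make the key idea concrete, here is a minimal plaintext sketch in Python/NumPy: clients are randomly clustered, each cluster is averaged (the step that secure aggregation would protect in the real protocol), and a robust rule is applied across the cluster means. The cluster size, the coordinate-wise median, and the toy data are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def share_round(client_updates, cluster_size=4):
    """One SHARE-style aggregation round, simulated in the clear.

    1. Randomly partition clients into clusters.
    2. Average within each cluster -- in the real protocol this average
       is computed under secure aggregation, so the server only ever
       sees cluster means, never individual client updates.
    3. Apply a robust aggregator (coordinate-wise median here) across
       the cluster means to filter clusters contaminated by Byzantine
       clients.
    """
    n = len(client_updates)
    perm = rng.permutation(n)
    cluster_means = [
        np.mean([client_updates[i] for i in perm[s:s + cluster_size]], axis=0)
        for s in range(0, n, cluster_size)
    ]
    return np.median(np.stack(cluster_means), axis=0)

# Toy run: 18 honest clients near the all-ones gradient, 2 Byzantine.
# With clusters of 4, at most 2 of the 5 cluster means are poisoned, so
# the coordinate-wise median still lands on an honest value.
d = 10
honest = [np.ones(d) + 0.1 * rng.standard_normal(d) for _ in range(18)]
byzantine = [np.full(d, -100.0) for _ in range(2)]
print(np.round(share_round(honest + byzantine), 2))  # roughly 1.0 everywhere
```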
Related papers
- Uncovering Attacks and Defenses in Secure Aggregation for Federated Deep Learning [17.45950557331482]
Federated learning enables the collaborative learning of a global model on diverse data, preserving data locality and eliminating the need to transfer user data to a central server.
Secure aggregation protocols are designed to mask/encrypt user updates and enable a central server to aggregate the masked information.
MicroSecAgg (PoPETS 2024) proposes a single-server secure aggregation protocol that aims to mitigate the high communication complexity of existing approaches.
arXiv Detail & Related papers (2024-10-13T00:06:03Z)
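The mask-and-aggregate idea from the entry above can be illustrated with a toy, non-cryptographic pairwise-masking scheme: each pair of clients shares a random mask that one adds and the other subtracts, so all masks cancel in the server's sum while individual updates stay hidden. Real protocols derive the masks from key agreement and handle client dropouts; this sketch assumes neither.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_updates(updates):
    """Toy pairwise masking: clients i < j share a random mask m_ij;
    client i sends update_i + m_ij and client j sends update_j - m_ij.
    Every mask appears once with each sign, so the server's sum of the
    masked vectors equals the true sum of the updates."""
    n, d = len(updates), len(updates[0])
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.standard_normal(d)  # stand-in for a PRG keyed by a shared secret
            masked[i] += m
            masked[j] -= m
    return masked

updates = [rng.standard_normal(8) for _ in range(4)]
masked = mask_updates(updates)
assert np.allclose(sum(masked), sum(updates))  # masks cancel in the aggregate
```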
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Robust Zero Trust Architecture: Joint Blockchain based Federated learning and Anomaly Detection based Framework [17.919501880326383]
This paper introduces a robust zero-trust architecture (ZTA) tailored for the decentralized system that empowers efficient remote work and collaboration within IoT networks.
Using blockchain-based federated learning principles, our proposed framework includes a robust aggregation mechanism designed to counteract malicious updates from compromised clients.
The framework integrates anomaly detection and trust computation, ensuring secure and reliable device collaboration in a decentralized fashion.
arXiv Detail & Related papers (2024-06-24T23:15:19Z)
- Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness [5.735144760031169]
Byzantine clients intentionally disrupt the learning process by broadcasting arbitrary model updates to other clients.
In this paper, we introduce SecureDL, a novel DL protocol designed to enhance the security and privacy of DL against Byzantine threats.
Our experiments show that SecureDL is effective even in the case of attacks by the malicious majority.
arXiv Detail & Related papers (2024-04-27T18:17:36Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
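As a hedged illustration of the noise injection mentioned in the entry above, the sketch below clips each update and adds Gaussian noise to the aggregate (the Gaussian mechanism from differential privacy); the clipping norm and noise scale are placeholder values, not recommendations from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_aggregate(updates, clip_norm=1.0, noise_std=0.5):
    """Clip each update to bound its contribution, sum, then add
    Gaussian noise scaled to the clipping norm (Gaussian mechanism)."""
    clipped = [
        u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in updates
    ]
    total = np.sum(clipped, axis=0)
    return total + rng.normal(0.0, noise_std * clip_norm, size=total.shape)

updates = [rng.standard_normal(8) for _ in range(10)]
print(np.round(noisy_aggregate(updates), 3))
```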
- ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning that protects against both Byzantine attacks and privacy leakage.
arXiv Detail & Related papers (2023-02-20T11:15:18Z)
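ByzSecAgg's exact commitment scheme is not described here, but the vector-commitment primitive named in its title can be illustrated generically with a Merkle tree: a single root hash commits to a whole vector, and individual positions can later be opened and verified. Treat this purely as an illustration of the primitive, not the paper's construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(leaves):
    """Commit to a vector of byte strings (length a power of two).
    Returns the full tree; the commitment is the root, tree[-1][0]."""
    level = [h(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def merkle_open(tree, index):
    """Produce the sibling path proving the leaf at `index`."""
    proof, i = [], index
    for level in tree[:-1]:
        proof.append(level[i ^ 1])
        i //= 2
    return proof

def merkle_verify(root, leaf, index, proof):
    node, i = h(leaf), index
    for sibling in proof:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

chunks = [f"update-chunk-{k}".encode() for k in range(4)]
tree = merkle_commit(chunks)
proof = merkle_open(tree, 2)
assert merkle_verify(tree[-1][0], chunks[2], 2, proof)
```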
- MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
arXiv Detail & Related papers (2022-08-22T09:17:58Z)
- Robust Federated Learning via Over-The-Air Computation [48.47690125123958]
Simple averaging of model updates via over-the-air computation leaves the learning task vulnerable to random or intentional modifications of the local model updates by malicious clients.
We propose a transmission and aggregation framework that is robust to such attacks while preserving the benefits of over-the-air computation for federated learning.
arXiv Detail & Related papers (2021-11-01T19:21:21Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
- Towards Bidirectional Protection in Federated Learning [70.36925233356335]
F2ED-LEARNING offers bidirectional defense against a malicious centralized server and Byzantine malicious clients.
F2ED-LEARNING securely aggregates each shard's update and launches FilterL2 on updates from different shards.
Evaluation shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal performance.
arXiv Detail & Related papers (2020-10-02T19:37:02Z)
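A plaintext sketch of the shard-then-filter pattern from the entry above: clients are partitioned into shards, each shard's mean is aggregated (securely, in the real system), and outlier shard updates are discarded before averaging. The distance-to-median filter below is a simple stand-in; FilterL2's actual statistic differs.

```python
import numpy as np

rng = np.random.default_rng(3)

def shard_and_filter(updates, num_shards=5, keep_frac=0.6):
    """Partition clients into shards, average each shard (the step secure
    aggregation would protect), then keep only the shard means closest to
    the coordinate-wise median and average those (stand-in for FilterL2)."""
    perm = rng.permutation(len(updates))
    shards = np.array_split(perm, num_shards)
    means = np.stack([np.mean([updates[i] for i in s], axis=0) for s in shards])
    center = np.median(means, axis=0)
    dists = np.linalg.norm(means - center, axis=1)
    keep = np.argsort(dists)[: max(1, int(keep_frac * num_shards))]
    return means[keep].mean(axis=0)

# Toy run: 18 honest clients near the all-ones gradient, 2 Byzantine.
d = 8
honest = [np.ones(d) + 0.1 * rng.standard_normal(d) for _ in range(18)]
byzantine = [np.full(d, 50.0) for _ in range(2)]
print(np.round(shard_and_filter(honest + byzantine), 2))  # roughly 1.0 everywhere
```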
- Byzantine-Resilient Secure Federated Learning [2.578242050187029]
This paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning.
BREA is based on an integrated verifiable detection and secure model aggregation approach that guarantees Byzantine resilience and convergence simultaneously.
Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
arXiv Detail & Related papers (2020-07-21T22:15:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.