MUDGUARD: Taming Malicious Majorities in Federated Learning using
Privacy-Preserving Byzantine-Robust Clustering
- URL: http://arxiv.org/abs/2208.10161v2
- Date: Tue, 14 Nov 2023 11:36:24 GMT
- Title: MUDGUARD: Taming Malicious Majorities in Federated Learning using
Privacy-Preserving Byzantine-Robust Clustering
- Authors: Rui Wang, Xingkai Wang, Huanhuan Chen, Jérémie Decouchant, Stjepan
Picek, Nikolaos Laoutaris and Kaitai Liang
- Abstract summary: Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
- Score: 34.429892915267686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Byzantine-robust Federated Learning (FL) aims to counter malicious clients
and train an accurate global model while maintaining an extremely low attack
success rate. Most existing systems, however, are only robust when most of the
clients are honest. FLTrust (NDSS '21) and Zeno++ (ICML '20) do not make such
an honest majority assumption but can only be applied to scenarios where the
server is provided with an auxiliary dataset used to filter malicious updates.
FLAME (USENIX '22) and EIFFeL (CCS '22) maintain the semi-honest majority
assumption to guarantee robustness and the confidentiality of updates. It is
therefore currently impossible to ensure Byzantine robustness and
confidentiality of updates without assuming a semi-honest majority. To tackle
this problem, we propose a novel Byzantine-robust and privacy-preserving FL
system, called MUDGUARD, that can operate under a malicious minority or
majority on both the server and client sides. Based on DBSCAN, we design a new
method for extracting features from model updates via pairwise adjusted cosine
similarity to boost the accuracy of the resulting clustering. To thwart attacks
from a malicious majority, we develop a method called Model Segmentation that
aggregates only the updates from within a cluster and sends the corresponding
model only to the clients of that cluster. The fundamental idea is that even if
malicious clients are in the majority, their poisoned updates cannot harm
benign clients, since they are confined to the malicious cluster. We also
leverage multiple cryptographic tools to conduct clustering without sacrificing
training correctness or update confidentiality. We present a detailed security
proof
and empirical evaluation along with a convergence analysis for MUDGUARD.
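The clustering step can be pictured in plaintext, setting the cryptographic protocol aside. The sketch below is a minimal illustration that assumes flattened update vectors and reads pairwise adjusted cosine similarity as cosine similarity taken after centering each coordinate by its mean across clients; the function names, the centering convention, and the DBSCAN parameters are illustrative assumptions, not MUDGUARD's exact specification.

```python
# Minimal plaintext sketch (no cryptography): cluster model updates by
# pairwise adjusted cosine similarity with DBSCAN. Names, the centering
# convention, and the eps/min_samples values are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def adjusted_cosine_matrix(updates: np.ndarray) -> np.ndarray:
    """updates: (n_clients, d) flattened model updates.
    Center each coordinate by its mean across clients, then take the
    pairwise cosine similarity of the centered vectors."""
    centered = updates - updates.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    norms[norms == 0] = 1e-12            # guard against all-zero updates
    unit = centered / norms
    return unit @ unit.T                 # (n_clients, n_clients) similarities

def cluster_updates(updates: np.ndarray, eps: float = 0.5,
                    min_samples: int = 2) -> np.ndarray:
    """Return a DBSCAN cluster label per client (-1 marks noise points)."""
    sim = adjusted_cosine_matrix(updates)
    dist = np.clip(1.0 - sim, 0.0, 2.0)  # convert similarity to a distance
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)
```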
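Model Segmentation can then be sketched on top of those labels: updates are averaged only within a cluster, and each client receives only its own cluster's model, so even a poisoned majority cluster cannot contaminate the models delivered to other clusters. Again, this is a plaintext illustration under the same assumptions (it treats DBSCAN noise points as singleton clusters, a choice made for the sketch rather than the paper's rule); in the actual system both steps run under the cryptographic protocol.

```python
# Sketch of per-cluster aggregation ("Model Segmentation"), continuing the
# previous snippet: one aggregated model per cluster, delivered only to that
# cluster's clients. Plaintext illustration only.
import numpy as np

def model_segmentation(global_model: np.ndarray, updates: np.ndarray,
                       labels: np.ndarray) -> dict:
    """global_model: (d,) current parameters; updates: (n_clients, d);
    labels: per-client cluster ids from cluster_updates()."""
    per_client_model = {}
    for cluster_id in set(labels.tolist()):
        members = [i for i, l in enumerate(labels) if l == cluster_id]
        if cluster_id == -1:
            # DBSCAN noise: keep each outlier isolated rather than pooled
            # (an assumption made for this sketch).
            for i in members:
                per_client_model[i] = global_model + updates[i]
            continue
        cluster_avg = updates[members].mean(axis=0)
        for i in members:
            per_client_model[i] = global_model + cluster_avg
    return per_client_model  # client index -> model sent to it next round
```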
Related papers
- FedCAP: Robust Federated Learning via Customized Aggregation and Personalization [13.17735010891312]
Federated learning (FL) has been applied to various privacy-preserving scenarios.
We propose FedCAP, a robust FL framework against both data heterogeneity and Byzantine attacks.
We show that FedCAP performs well in several non-IID settings and shows strong robustness under a series of poisoning attacks.
arXiv Detail & Related papers (2024-10-16T23:01:22Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness [5.735144760031169]
Byzantine clients intentionally disrupt the learning process by broadcasting arbitrary model updates to other clients.
In this paper, we introduce SecureDL, a novel DL protocol designed to enhance the security and privacy of DL against Byzantine threats.
Our experiments show that SecureDL is effective even in the case of attacks by the malicious majority.
arXiv Detail & Related papers (2024-04-27T18:17:36Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- BayBFed: Bayesian Backdoor Defense for Federated Learning [17.433543798151746]
Federated learning (FL) allows participants to jointly train a machine learning model without sharing their private data with others.
BayBFed proposes to utilize probability distributions over client updates to detect malicious updates in FL.
arXiv Detail & Related papers (2023-01-23T16:01:30Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- zPROBE: Zero Peek Robustness Checks for Federated Learning [18.84828158927185]
Privacy-preserving federated learning allows multiple users to jointly train a model with coordination of a central server.
Keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected.
Our framework, zPROBE, enables Byzantine resilient and secure federated learning.
arXiv Detail & Related papers (2022-06-24T06:20:37Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Towards Bidirectional Protection in Federated Learning [70.36925233356335]
F2ED-LEARNING offers bidirectional defense against a malicious centralized server and Byzantine malicious clients.
F2ED-LEARNING securely aggregates each shard's update and launches FilterL2 on updates from different shards.
Evaluation shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal performance.
arXiv Detail & Related papers (2020-10-02T19:37:02Z)