zPROBE: Zero Peek Robustness Checks for Federated Learning
- URL: http://arxiv.org/abs/2206.12100v3
- Date: Tue, 5 Sep 2023 17:14:01 GMT
- Title: zPROBE: Zero Peek Robustness Checks for Federated Learning
- Authors: Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar
- Abstract summary: Privacy-preserving federated learning allows multiple users to jointly train a model with coordination of a central server.
Keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected.
Our framework, zPROBE, enables Byzantine resilient and secure federated learning.
- Score: 18.84828158927185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving federated learning allows multiple users to jointly train
a model with coordination of a central server. The server only learns the final
aggregation result, thus the users' (private) training data is not leaked from
the individual model updates. However, keeping the individual updates private
allows malicious users to perform Byzantine attacks and degrade the accuracy
without being detected. Best existing defenses against Byzantine workers rely
on robust rank-based statistics, e.g., median, to find malicious updates.
However, implementing privacy-preserving rank-based statistics is nontrivial
and not scalable in the secure domain, as it requires sorting all individual
updates. We establish the first private robustness check that uses high break
point rank-based statistics on aggregated model updates. By exploiting
randomized clustering, we significantly improve the scalability of our defense
without compromising privacy. We leverage our statistical bounds in
zero-knowledge proofs to detect and remove malicious updates without revealing
the private user updates. Our novel framework, zPROBE, enables Byzantine
resilient and secure federated learning. Empirical evaluations demonstrate that
zPROBE provides a low overhead solution to defend against state-of-the-art
Byzantine attacks while preserving privacy.
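To make the approach concrete, the sketch below mimics only the statistical core of the defense on plaintext updates: users are randomly clustered, cluster aggregates are compared against a coordinate-wise median with a robust acceptance band, and updates outside the band are rejected. This is not zPROBE itself: in the actual framework the cluster aggregates are computed under secure aggregation and each user's comparison against the bound is verified with zero-knowledge proofs, and the parameters below (num_clusters, eta) are illustrative assumptions rather than values from the paper.

```python
# Minimal plaintext sketch of a median-based robustness check over
# randomly clustered updates (illustration only, not the zPROBE protocol).
import numpy as np

def median_band_check(updates, num_clusters=8, eta=4.0, seed=0):
    """Return a boolean mask of users whose update passes the check."""
    rng = np.random.default_rng(seed)
    n_users, _ = updates.shape

    # 1. Randomly assign users to clusters and aggregate each cluster.
    assignment = rng.integers(0, num_clusters, size=n_users)
    cluster_means = np.stack([updates[assignment == c].mean(axis=0)
                              for c in range(num_clusters)
                              if np.any(assignment == c)])

    # 2. Robust reference: coordinate-wise median of the cluster means,
    #    with a MAD-based scale estimate rescaled to the per-user level.
    med = np.median(cluster_means, axis=0)
    sigma_cluster = 1.4826 * np.median(np.abs(cluster_means - med))
    sigma_user = sigma_cluster * np.sqrt(n_users / len(cluster_means)) + 1e-12

    # 3. Accept a user only if every coordinate of its update stays
    #    inside the band median +/- eta * sigma_user.
    deviation = np.abs(updates - med) / sigma_user
    return deviation.max(axis=1) <= eta

# Toy run: 30 honest users plus 2 scaled (Byzantine) updates, 16 parameters.
honest = np.random.default_rng(1).normal(0.0, 0.1, size=(30, 16))
byzantine = np.full((2, 16), 5.0)
mask = median_band_check(np.vstack([honest, byzantine]))
print("accepted:", int(mask.sum()), "of", mask.size)  # the 2 Byzantine rows should be rejected
```

Note that the medians here are taken over a handful of cluster aggregates rather than over all individual updates, which is the source of the scalability claim in the abstract: no sorting of per-user updates is needed.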
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning [18.1129191782913]
Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection.
Traditional federated learning is vulnerable to poisoning attacks, which can not only decrease the model performance, but also implant malicious backdoors.
In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants.
arXiv Detail & Related papers (2024-06-03T07:59:10Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
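As a rough, self-contained illustration of that claim (not the construction, data, or threat model from the cited paper), the toy below fits a linear model to synthetic per-round aggregates that carry a weak, property-dependent signal from one hypothetical target client:

```python
# Hypothetical toy: infer a client-specific property from aggregated updates
# using a simple linear model on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
dim, rounds, n_clients = 32, 400, 10

# Simulate per-round aggregates: benign clients contribute noise, while the
# target client adds a weak direction only when its data has the property.
property_direction = rng.normal(size=dim)
labels = rng.integers(0, 2, size=rounds)  # 1 = target's data has the property
benign = rng.normal(0.0, 1.0, size=(rounds, dim)) * np.sqrt(n_clients - 1)
aggregates = benign + labels[:, None] * property_direction

# Attacker fits a linear model on shadow rounds with known labels, then
# infers the property on held-out rounds from the aggregate alone.
clf = LogisticRegression(max_iter=1000).fit(aggregates[:300], labels[:300])
print("held-out inference accuracy:", clf.score(aggregates[300:], labels[300:]))
```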
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
arXiv Detail & Related papers (2022-08-22T09:17:58Z)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency [2.647302105102753]
Federated learning allows a set of users to train a deep neural network over their private training datasets.
We show that a malicious server can easily elude secure aggregation as if the latter were not in place.
We devise two different attacks capable of inferring information on individual private training datasets.
arXiv Detail & Related papers (2021-11-14T16:09:11Z)
- Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models [56.0250919557652]
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
Previous attacks on user privacy have been limited in scope and do not scale to gradient updates aggregated over even a handful of data points.
We introduce a new threat model based on minimal but malicious modifications of the shared model architecture.
arXiv Detail & Related papers (2021-10-25T15:52:06Z)
- Secure and Privacy-Preserving Federated Learning via Co-Utility [7.428782604099875]
We build a federated learning framework that offers privacy to the participating peers and security against Byzantine and poisoning attacks.
Unlike privacy protection via update aggregation, our approach preserves the values of model updates and hence the accuracy of plain federated learning.
arXiv Detail & Related papers (2021-08-04T08:58:24Z)