Towards Bidirectional Protection in Federated Learning
- URL: http://arxiv.org/abs/2010.01175v2
- Date: Sun, 30 May 2021 20:24:12 GMT
- Title: Towards Bidirectional Protection in Federated Learning
- Authors: Lun Wang, Qi Pang, Shuai Wang and Dawn Song
- Abstract summary: F2ED-LEARNING offers a bidirectional defense against both a malicious centralized server and Byzantine malicious clients.
F2ED-LEARNING securely aggregates each shard's update and launches FilterL2 on the updates from different shards.
Evaluation shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal performance.
- Score: 70.36925233356335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior efforts in enhancing federated learning (FL) security fall into two
categories. At one end of the spectrum, some work uses secure aggregation
techniques to hide the individual client's updates and only reveal the
aggregated global update to a malicious server that strives to infer the
clients' privacy from their updates. At the other end of the spectrum, some
work uses Byzantine-robust FL protocols to suppress the influence of malicious
clients' updates. We present a federated learning protocol F2ED-LEARNING,
which, for the first time, offers bidirectional defense to simultaneously
combat a malicious centralized server and Byzantine malicious
clients. To defend against Byzantine malicious clients, F2ED-LEARNING provides
dimension-free estimation error by employing and calibrating a well-studied
robust mean estimator FilterL2. F2ED-LEARNING also leverages secure aggregation
to protect clients from a malicious server. One key challenge of F2ED-LEARNING
is to address the incompatibility between FilterL2 and secure aggregation
schemes. Concretely, FilterL2 has to check the individual updates from clients
whereas secure aggregation hides those updates from the malicious server. To
this end, we propose a practical and highly effective solution to split the
clients into shards, where F2ED-LEARNING securely aggregates each shard's
update and launches FilterL2 on updates from different shards. The evaluation
shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal
performance and outperforms five secure FL protocols under five popular
attacks.
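The shard-based design described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the pairwise-mask aggregation stands in for a full secure aggregation protocol, and `filtered_mean` is a simplified distance-based outlier filter standing in for FilterL2 (which iteratively reweights points using the top eigenvector of the empirical covariance). All names, shard sizes, and thresholds here are illustrative assumptions.

```python
import numpy as np

def masked_shard_sum(updates, rng):
    """Toy secure aggregation inside one shard: pairwise masks cancel in the
    sum, so the server learns only the shard total, never individual updates."""
    n = len(updates)
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask   # client i adds the pairwise mask
            masked[j] -= mask   # client j subtracts it, so it cancels
    return sum(masked)

def filtered_mean(shard_means, sigma):
    """Simplified stand-in for FilterL2: repeatedly drop the shard mean
    farthest from the current center until the spread looks benign."""
    pts = list(shard_means)
    while len(pts) > 1:
        center = np.mean(pts, axis=0)
        dists = [np.linalg.norm(p - center) for p in pts]
        if max(dists) <= 2.0 * sigma:       # heuristic stopping rule
            break
        pts.pop(int(np.argmax(dists)))      # remove the largest outlier
    return np.mean(pts, axis=0)

rng = np.random.default_rng(0)
dim, shard_size = 5, 4
honest = [rng.normal(0.0, 0.1, dim) for _ in range(11)]
byzantine = [np.full(dim, 50.0)]            # one poisoned update
clients = honest + byzantine
shards = [clients[i:i + shard_size] for i in range(0, len(clients), shard_size)]
shard_means = [masked_shard_sum(s, rng) / len(s) for s in shards]
global_update = filtered_mean(shard_means, sigma=0.5)
```

The point of the sharding split is visible here: FilterL2 never sees a single client's update, only per-shard aggregates, which resolves the incompatibility with secure aggregation at the cost of a coarser filtering granularity.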
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
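The double-masking idea mentioned above can be sketched as follows. This shows only the first (pairwise) mask layer over a toy ring; the real SecAgg protocol adds a per-client self-mask recovered via Shamir secret sharing to tolerate dropouts, which is omitted here, and the seed values are illustrative assumptions.

```python
import random

MOD = 2 ** 16  # arithmetic over a finite ring, as in SecAgg

def pairwise_mask(seed, dim):
    """Both members of a client pair derive the same mask from a shared seed."""
    rnd = random.Random(seed)
    return [rnd.randrange(MOD) for _ in range(dim)]

def mask_update(update, my_id, all_ids, seeds):
    """The client with the smaller id adds each pairwise mask and the larger
    id subtracts it, so every mask cancels in the server's sum."""
    out = list(update)
    for peer in all_ids:
        if peer == my_id:
            continue
        mask = pairwise_mask(seeds[frozenset((my_id, peer))], len(update))
        sign = 1 if my_id < peer else -1
        out = [(x + sign * m) % MOD for x, m in zip(out, mask)]
    return out

ids = [0, 1, 2]
seeds = {frozenset(p): 31 * min(p) + max(p) for p in [(0, 1), (0, 2), (1, 2)]}
updates = [[5, 5, 5, 5], [7, 7, 7, 7], [9, 9, 9, 9]]   # toy quantized updates
masked = [mask_update(u, i, ids, seeds) for i, u in zip(ids, updates)]
server_sum = [sum(col) % MOD for col in zip(*masked)]  # masks cancel: [21, 21, 21, 21]
```

Each masked vector looks uniformly random to the server, yet the sum equals the plain sum of the updates, which is exactly the property the aggregation rules above rely on.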
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Privacy-Preserving Aggregation for Decentralized Learning with Byzantine-Robustness [5.735144760031169]
Byzantine clients intentionally disrupt the learning process by broadcasting arbitrary model updates to other clients.
In this paper, we introduce SecureDL, a novel DL protocol designed to enhance the security and privacy of DL against Byzantine threats.
Our experiments show that SecureDL is effective even in the case of attacks by the malicious majority.
arXiv Detail & Related papers (2024-04-27T18:17:36Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to either the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- ScionFL: Efficient and Robust Secure Quantized Aggregation [36.668162197302365]
We introduce ScionFL, a secure aggregation framework for federated learning that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.
We show that with no overhead for clients and moderate overhead for the server, we obtain comparable accuracy for standard FL benchmarks.
arXiv Detail & Related papers (2022-10-13T21:46:55Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
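The ensemble guarantee summarized above can be made concrete with a small sketch. This is a hypothetical illustration, not FLCert's implementation: the group models are stubbed out, and the paper's actual certification derives the tolerable number of malicious clients formally rather than by example.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among the ensemble's predictions."""
    (label, _), = Counter(labels).most_common(1)
    return label

def flcert_predict(group_models, x):
    """Query each group's model and return the majority label: m malicious
    clients corrupt at most m group models, so the prediction is provably
    unchanged whenever the winning vote margin exceeds 2 * m."""
    return majority_vote([model(x) for model in group_models])

honest_model = lambda x: "cat"      # models from groups with no malicious client
poisoned_model = lambda x: "dog"    # models from groups containing attackers
models = [honest_model] * 7 + [poisoned_model] * 3   # 3 of 10 groups corrupted
```

With a 7-to-3 margin, up to two additional corrupted groups still could not flip the vote, which is the sense in which the predicted label is provably unaffected by a bounded number of malicious clients.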
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
arXiv Detail & Related papers (2022-08-22T09:17:58Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all claimed quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
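One simple way to realize quantity-aware robustness of the kind described above is to clip each client's claimed data quantity at a robust cap before weighting. This is a minimal sketch under that assumption, not FedRA's actual algorithm (which also handles malicious update values); the cap rule and all numbers are illustrative.

```python
import numpy as np

def quantity_aware_aggregate(updates, claimed_sizes, cap_factor=2.0):
    """Weight updates by claimed data quantity, but clip every claim at
    cap_factor * median(claims) so an inflated quantity cannot dominate."""
    sizes = np.asarray(claimed_sizes, dtype=float)
    cap = cap_factor * np.median(sizes)
    weights = np.minimum(sizes, cap)
    weights /= weights.sum()                 # normalize clipped weights
    return sum(w * u for w, u in zip(weights, updates))

honest = [np.array([1.0, 1.0]) for _ in range(4)]
bad = [np.array([100.0, 100.0])]             # poisoned update direction
updates = honest + bad
claims = [100, 100, 100, 100, 100000]        # attacker claims a huge dataset
agg = quantity_aware_aggregate(updates, claims)
```

Plain quantity-weighted averaging would let the inflated claim pull the result to roughly 99.6 per coordinate; with the median-based cap the attacker's weight is bounded at 200/600, so the aggregate lands at 34.0 instead, bounding the amplification.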
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.