FLDetector: Defending Federated Learning Against Model Poisoning Attacks
via Detecting Malicious Clients
- URL: http://arxiv.org/abs/2207.09209v2
- Date: Wed, 20 Jul 2022 03:17:36 GMT
- Title: FLDetector: Defending Federated Learning Against Model Poisoning Attacks
via Detecting Malicious Clients
- Authors: Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
- Abstract summary: Federated learning (FL) is vulnerable to model poisoning attacks.
Malicious clients corrupt the global model via sending manipulated model updates to the server.
Our FLDetector aims to detect and remove the majority of the malicious clients.
- Score: 39.88152764752553
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is vulnerable to model poisoning attacks, in which
malicious clients corrupt the global model via sending manipulated model
updates to the server. Existing defenses mainly rely on Byzantine-robust FL
methods, which aim to learn an accurate global model even if some clients are
malicious. However, they can only resist a small number of malicious clients in
practice. It is still an open challenge how to defend against model poisoning
attacks with a large number of malicious clients. Our FLDetector addresses this
challenge via detecting malicious clients. FLDetector aims to detect and remove
the majority of the malicious clients such that a Byzantine-robust FL method
can learn an accurate global model using the remaining clients. Our key
observation is that, in model poisoning attacks, the model updates from a
client across multiple iterations are inconsistent. Therefore, FLDetector detects
malicious clients by checking the consistency of their model updates. Roughly
speaking, the server predicts a client's model update in each iteration based
on its historical model updates using the Cauchy mean value theorem and L-BFGS,
and flags a client as malicious if the received model update from the client
and the predicted model update are inconsistent in multiple iterations. Our
extensive experiments on three benchmark datasets show that FLDetector can
accurately detect malicious clients in multiple state-of-the-art model
poisoning attacks. After removing the detected malicious clients, existing
Byzantine-robust FL methods can learn accurate global models. Our code is
available at https://github.com/zaixizhang/FLDetector.
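
The sketch below is a minimal, non-authoritative illustration of the consistency check described in the abstract: following the Cauchy mean value theorem, the server predicts client i's update as g_hat_i = g_i_prev + H * (w_t - w_prev), scores each client by the normalized distance between its received and predicted updates averaged over a sliding window, and flags the higher-scoring group. It is not the released implementation: the Hessian-vector product uses a crude elementwise secant approximation in place of the paper's L-BFGS construction, the clustering is a plain 1-D 2-means instead of Gap statistics followed by k-means, and all function names, the window size, and the data layout are illustrative assumptions.

```python
import numpy as np

def hvp_secant(dW, dG, v, eps=1e-8):
    """Crude Hessian-vector product H @ v from stored global-model differences dW
    and global-update differences dG (lists of flat arrays). The paper builds this
    with L-BFGS; an elementwise secant ratio averaged over the window stands in here."""
    ratios = []
    for dw, dg in zip(dW, dG):
        safe_dw = np.where(np.abs(dw) > eps, dw, 1.0)  # avoid division by ~0
        ratios.append(np.where(np.abs(dw) > eps, dg / safe_dw, 0.0))
    return np.mean(ratios, axis=0) * v

def detect_round(w_t, w_prev, global_dW, global_dG,
                 client_updates, prev_client_updates, score_history, window=10):
    """One round of the consistency check (all names and arguments are illustrative).
    client_updates / prev_client_updates: {client_id: flat update array}
    score_history: {client_id: list of recent normalized distances}, mutated in place.
    Returns the set of client ids flagged as suspicious this round."""
    Hv = hvp_secant(global_dW, global_dG, w_t - w_prev)

    # Predict each client's update from its previous one and compare with what arrived.
    dists = {}
    for cid, g in client_updates.items():
        g_hat = prev_client_updates[cid] + Hv
        dists[cid] = np.linalg.norm(g - g_hat)

    total = sum(dists.values()) + 1e-12
    for cid, d in dists.items():
        score_history[cid] = (score_history[cid] + [d / total])[-window:]

    # Average scores over the window, then split clients into two groups with a tiny
    # 1-D 2-means (the paper first applies Gap statistics to decide whether a
    # benign/malicious split exists at all); flag the higher-scoring group.
    ids = list(score_history.keys())
    scores = np.array([np.mean(score_history[cid]) for cid in ids])
    centers = np.array([scores.min(), scores.max()], dtype=float)
    for _ in range(20):
        assign = np.abs(scores[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centers[k] = scores[assign == k].mean()
    return {cid for cid, a in zip(ids, assign) if a == centers.argmax()}
```

Flagged clients would then be excluded from aggregation so that a Byzantine-robust rule (e.g., Median or Trimmed-mean) runs on the remaining updates, which is how the abstract describes FLDetector being combined with existing defenses.
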
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Protecting Federated Learning from Extreme Model Poisoning Attacks via
Multidimensional Time Series Anomaly Detection [1.74243547444997]
We introduce FLANDERS, a novel pre-aggregation filter for FL resilient to large-scale model poisoning attacks.
Experiments conducted in several non-iid FL setups show that FLANDERS significantly improves robustness across a wide spectrum of attacks when paired with existing standard and robust aggregation methods.
arXiv Detail & Related papers (2023-03-29T13:22:20Z)
- BayBFed: Bayesian Backdoor Defense for Federated Learning [17.433543798151746]
Federated learning (FL) allows participants to jointly train a machine learning model without sharing their private data with others.
BayBFed proposes to utilize probability distributions over client updates to detect malicious updates in FL.
arXiv Detail & Related papers (2023-01-23T16:01:30Z)
- FedRecover: Recovering from Poisoning Attacks in Federated Learning
using Historical Information [67.8846134295194]
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks with small cost for the clients.
arXiv Detail & Related papers (2022-10-20T00:12:34Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
The label predicted by FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- MPAF: Model Poisoning Attacks to Federated Learning based on Fake
Clients [51.973224448076614]
We propose the first model poisoning attack based on fake clients, called MPAF.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
arXiv Detail & Related papers (2022-03-16T14:59:40Z)
- Learning to Detect Malicious Clients for Robust Federated Learning [20.5238037608738]
Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
arXiv Detail & Related papers (2020-02-01T14:09:48Z)