MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
- URL: http://arxiv.org/abs/2203.08669v1
- Date: Wed, 16 Mar 2022 14:59:40 GMT
- Title: MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
- Authors: Xiaoyu Cao and Neil Zhenqiang Gong
- Abstract summary: We propose the first Model Poisoning Attack based on Fake clients, called MPAF.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
- Score: 51.973224448076614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing model poisoning attacks to federated learning assume that an attacker has access to a large fraction of compromised genuine clients. However, such an assumption is not realistic in production federated learning systems that involve millions of clients. In this work, we propose the first Model Poisoning Attack based on Fake clients, called MPAF. Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learnt global model has low accuracy on many indiscriminate test inputs. Towards this goal, our attack drags the global model towards an attacker-chosen base model that has low accuracy. In particular, in each round of federated learning, the fake clients craft fake local model updates that point to the base model and scale them up to amplify their impact before sending them to the cloud server. Our experiments show that MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted, highlighting the need for more advanced defenses.
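To make the described attack mechanism concrete, below is a minimal Python/NumPy sketch of the update-crafting step as summarized in the abstract: each fake client reports an update that points from the current global model toward the attacker-chosen base model and is scaled up before being sent. The function and variable names (`craft_fake_update`, `w_global`, `w_base`, `scale`) and the scaling value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def craft_fake_update(w_global: np.ndarray,
                      w_base: np.ndarray,
                      scale: float = 1e6) -> np.ndarray:
    """Craft a fake local model update in the spirit of the abstract:
    point from the current global model toward an attacker-chosen
    low-accuracy base model, then scale it up to amplify its impact.
    The scale value is an arbitrary placeholder, not the paper's setting."""
    direction = w_base - w_global   # drag the global model toward the base model
    return scale * direction        # amplification before sending to the server

# Illustrative single round: every fake client reports the same crafted update.
w_global = np.random.randn(1000)    # current global model (flattened, for illustration)
w_base = np.random.randn(1000)      # attacker-chosen base model with low accuracy
fake_updates = [craft_fake_update(w_global, w_base) for _ in range(100)]  # 100 fake clients
```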
Related papers
- Model Hijacking Attack in Federated Learning [19.304332176437363]
HijackFL is the first-of-its-kind hijacking attack against the global model in federated learning.
It aims to force the global model to perform a task different from its original one without the server or benign clients noticing.
We conduct extensive experiments on four benchmark datasets and three popular models.
arXiv Detail & Related papers (2024-08-04T20:02:07Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from the client side.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information [67.8846134295194]
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks with small cost for the clients.
arXiv Detail & Related papers (2022-10-20T00:12:34Z) - FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
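The FLCert summary above describes an ensemble whose predicted label is provably unaffected by a bounded number of malicious clients. As a hedged illustration of the general ensemble idea, majority voting over several global models, each assumed to be trained on its own group of clients, the sketch below shows only the voting step; it is not FLCert's exact construction, and all names are assumptions.

```python
from collections import Counter
from typing import Callable, Sequence

def ensemble_predict(models: Sequence[Callable[[object], int]], x: object) -> int:
    """Majority vote over an ensemble of global models.

    Intuition behind provable robustness: if each global model is trained on a
    disjoint group of clients, a bounded number of malicious clients can corrupt
    only a bounded number of models, so the majority label can remain unchanged.
    Training of the per-group models is assumed to happen elsewhere; here the
    models are arbitrary callables that return a label.
    """
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Toy usage: three "models" that each return a label; the majority wins.
toy_models = [lambda x: 0, lambda x: 0, lambda x: 1]
print(ensemble_predict(toy_models, x=None))   # -> 0
```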
arXiv Detail & Related papers (2022-10-02T17:50:04Z) - FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients [39.88152764752553]
Federated learning (FL) is vulnerable to model poisoning attacks.
Malicious clients corrupt the global model via sending manipulated model updates to the server.
Our FLDetector aims to detect and remove the majority of the malicious clients.
arXiv Detail & Related papers (2022-07-19T11:44:24Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
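FedRA's exact aggregation rule is not given in the summary above; the sketch below is a hypothetical illustration of one quantity-aware safeguard, capping the claimed sample counts before weighted averaging so that no single client, honest or malicious, can claim an arbitrarily large weight. The function name, cap value, and signature are assumptions for illustration only.

```python
import numpy as np
from typing import List

def quantity_aware_average(updates: List[np.ndarray],
                           claimed_counts: List[int],
                           cap: int = 1000) -> np.ndarray:
    """Weighted average of client updates that caps each claimed data quantity.

    Hypothetical illustration of quantity-aware aggregation (not FedRA's exact
    rule): clipping the claimed counts bounds how much weight any single client
    can obtain in the aggregation.
    """
    weights = np.minimum(np.asarray(claimed_counts, dtype=float), cap)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage: a malicious client claiming 10**9 samples gets capped to `cap`.
updates = [np.ones(3), np.zeros(3)]
print(quantity_aware_average(updates, claimed_counts=[10**9, 500]))
```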
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks [25.549815759093068]
Federated learning is vulnerable to model poisoning attacks.
This is because malicious clients can collude to make the global model inaccurate.
We develop TESSERACT, a defense against such directed deviation attacks.
arXiv Detail & Related papers (2021-10-19T17:03:29Z) - Learning to Detect Malicious Clients for Robust Federated Learning [20.5238037608738]
Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
arXiv Detail & Related papers (2020-02-01T14:09:48Z)