Fabricated Flips: Poisoning Federated Learning without Data
- URL: http://arxiv.org/abs/2202.05877v2
- Date: Wed, 2 Aug 2023 16:27:26 GMT
- Title: Fabricated Flips: Poisoning Federated Learning without Data
- Authors: Jiyue Huang, Zilong Zhao, Lydia Y. Chen, Stefanie Roos
- Abstract summary: Attacks on Federated Learning (FL) can severely reduce the quality of the generated models.
We propose a data-free untargeted attack (DFA) that synthesizes malicious data to craft adversarial models.
DFA achieves a similar or even higher attack success rate than state-of-the-art untargeted attacks.
- Score: 9.060263645085564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attacks on Federated Learning (FL) can severely reduce the quality of the
generated models and limit the usefulness of this emerging learning paradigm
that enables on-premise decentralized learning. However, existing untargeted
attacks are not practical for many scenarios as they assume that i) the
attacker knows every update of benign clients, or ii) the attacker has a large
dataset to locally train updates imitating benign parties. In this paper, we
propose a data-free untargeted attack (DFA) that synthesizes malicious data to
craft adversarial models without eavesdropping on the transmission of benign
clients at all or requiring a large quantity of task-specific training data. We
design two variants of DFA, namely DFA-R and DFA-G, which differ in how they
trade off stealthiness and effectiveness. Specifically, DFA-R iteratively
optimizes a malicious data layer to minimize the prediction confidence of all
outputs of the global model, whereas DFA-G interactively trains a malicious
data generator network by steering the output of the global model toward a
particular class. Experimental results on Fashion-MNIST, CIFAR-10, and SVHN
show that DFA, despite requiring fewer assumptions than existing attacks,
achieves a similar or even higher attack success rate than state-of-the-art
untargeted attacks against various state-of-the-art defense mechanisms.
Concretely, the DFA variants evade all considered defense mechanisms in at
least 50% of the cases for CIFAR-10 and often reduce the accuracy by more than
a factor of 2. Consequently, we design REFD, a defense specifically crafted to protect
against data-free attacks. REFD leverages a reference dataset to detect updates
that are biased or have a low confidence. It greatly improves upon existing
defenses by filtering out the malicious updates and achieves high global model
accuracy.
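As a concrete illustration of the DFA-R objective described above, the sketch below optimizes a synthetic batch (the "malicious data layer") so that the received global model produces near-uniform, i.e. minimally confident, predictions; DFA-G would instead train a generator network that steers the model's output toward a particular class. The batch shape, step count, and learning rate are illustrative assumptions, not the authors' exact configuration.
```python
# Hedged sketch of the DFA-R idea: optimize a trainable batch so the global
# model's predictions approach the uniform distribution (minimal confidence).
import torch
import torch.nn.functional as F

def craft_malicious_batch(global_model, batch_shape=(64, 3, 32, 32),
                          num_classes=10, steps=100, lr=0.1):
    global_model.eval()
    x = torch.randn(batch_shape, requires_grad=True)  # the "malicious data layer"
    uniform = torch.full((batch_shape[0], num_classes), 1.0 / num_classes)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        log_probs = F.log_softmax(global_model(x), dim=1)
        # KL divergence to the uniform distribution pushes confidence down.
        loss = F.kl_div(log_probs, uniform, reduction="batchmean")
        loss.backward()
        opt.step()
    return x.detach()
```
The REFD defense can be pictured in a similarly simple way: score each submitted model on a small trusted reference dataset and reject it if its predictions are strongly biased toward one class or have unusually low confidence. The function and thresholds below are hypothetical, reusing the imports above.
```python
# Hedged sketch of REFD-style filtering on a trusted reference dataset.
def refd_accepts(candidate_model, ref_loader, num_classes=10,
                 bias_thresh=0.5, conf_thresh=0.3):
    candidate_model.eval()
    probs = []
    with torch.no_grad():
        for x, _ in ref_loader:
            probs.append(F.softmax(candidate_model(x), dim=1))
    probs = torch.cat(probs)
    share = torch.bincount(probs.argmax(dim=1), minlength=num_classes).float()
    share /= share.sum()
    biased = share.max().item() > bias_thresh  # predictions skewed toward one class
    low_conf = probs.max(dim=1).values.mean().item() < conf_thresh  # low confidence
    return not (biased or low_conf)
```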
Related papers
- Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense [3.685395311534351]
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private local data.
FL systems are vulnerable to attacks mounted by malicious clients through data poisoning and model poisoning.
Existing defense methods typically focus on mitigating specific types of poisoning and are often ineffective against unseen types of attack.
arXiv Detail & Related papers (2024-08-05T20:27:45Z)
- Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning [12.352511156767338]
Federated learning is highly susceptible to model poisoning attacks.
In this paper, we propose AdaAggRL, an RL-based adaptive aggregation method.
Experiments on four real-world datasets demonstrate that the proposed defense model significantly outperforms widely adopted defense models for sophisticated attacks.
arXiv Detail & Related papers (2024-06-20T11:33:14Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
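The core mechanism, as summarized, can be sketched as follows: flatten each client update, take its low-frequency DCT components as a compact signature, and aggregate only updates whose signature is close to the majority. The distance-to-median filtering rule here is a simplification of the paper's actual aggregation, and keep_frac and tol are illustrative parameters.
```python
# Hedged sketch of frequency-domain filtering in the spirit of FreqFed.
import numpy as np
from scipy.fft import dct

def freq_filter_aggregate(updates, keep_frac=0.1, tol=2.0):
    """updates: list of 1-D numpy arrays (flattened model updates)."""
    n_low = max(1, int(len(updates[0]) * keep_frac))
    # Low-frequency DCT coefficients summarize each update's coarse structure.
    sigs = np.stack([dct(u, norm="ortho")[:n_low] for u in updates])
    center = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - center, axis=1)
    kept = [u for u, d in zip(updates, dists) if d <= tol * np.median(dists) + 1e-12]
    return np.mean(kept, axis=0)  # average only the accepted updates
```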
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates to defend the target model, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves robustness improvements with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
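One minimal way to realize a "more achievable learning target," in the spirit of the summary above, is to keep the training loss near a target level alpha instead of driving it to zero (which encourages the memorization that MIAs exploit). This absolute-deviation form is a simplification for illustration; the paper's actual procedure is more involved.
```python
# Hedged sketch: descend when the loss is above the target, ascend when below.
import torch.nn.functional as F

def relaxed_loss(logits, labels, alpha=1.0):
    ce = F.cross_entropy(logits, labels)
    return (ce - alpha).abs()

# Usage in an ordinary training step (model and optimizer assumed):
#   loss = relaxed_loss(model(x), y, alpha=1.0)
#   loss.backward(); optimizer.step()
```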
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Defending against the Label-flipping Attack in Federated Learning [5.769445676575767]
Federated learning (FL) provides autonomy and privacy by design to participating peers.
The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples.
We propose a novel defense that first dynamically extracts from the peers' local updates the gradients most relevant to the LF attack.
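For reference, the LF attack itself is easy to state in code: a malicious peer relabels a fraction of its local examples from a source class to a target class before training. The class indices and flip fraction below are arbitrary illustrative choices.
```python
# Toy illustration of the label-flipping (LF) attack on a local dataset.
import numpy as np

def flip_labels(labels, src_class=1, dst_class=7, frac=1.0, seed=0):
    """Return a copy of `labels` with a fraction of src_class flipped to dst_class."""
    rng = np.random.default_rng(seed)
    flipped = labels.copy()
    idx = np.flatnonzero(flipped == src_class)
    chosen = rng.choice(idx, size=int(len(idx) * frac), replace=False)
    flipped[chosen] = dst_class
    return flipped
```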
arXiv Detail & Related papers (2022-07-05T12:02:54Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.