FL-WBC: Enhancing Robustness against Model Poisoning Attacks in
Federated Learning from a Client Perspective
- URL: http://arxiv.org/abs/2110.13864v1
- Date: Tue, 26 Oct 2021 17:13:35 GMT
- Title: FL-WBC: Enhancing Robustness against Model Poisoning Attacks in
Federated Learning from a Client Perspective
- Authors: Jingwei Sun, Ang Li, Louis DiValentin, Amin Hassanzadeh, Yiran Chen,
Hai Li
- Abstract summary: Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices.
Recent works have demonstrated that FL is vulnerable to model poisoning attacks.
We propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks.
- Score: 35.10520095377653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a popular distributed learning framework that
trains a global model through iterative communications between a central server
and edge devices. Recent works have demonstrated that FL is vulnerable to model
poisoning attacks. Several server-based defense approaches (e.g., robust
aggregation) have been proposed to mitigate such attacks. However, we
empirically show that under extremely strong attacks, these defensive methods
fail to guarantee the robustness of FL. More importantly, we observe that as
long as the global model is polluted, the impact of attacks on the global model
will remain in subsequent rounds even if there are no subsequent attacks. In
this work, we propose a client-based defense, named White Blood Cell for
Federated Learning (FL-WBC), which can mitigate model poisoning attacks that
have already polluted the global model. The key idea of FL-WBC is to identify
the parameter space where the long-lasting attack effect on parameters resides and
perturb that space during local training. Furthermore, we derive a certified
robustness guarantee against model poisoning attacks and a convergence
guarantee to FedAvg after applying our FL-WBC. We conduct experiments on
FashionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model
poisoning attacks. The results demonstrate that our method can effectively
mitigate the impact of model poisoning attacks on the global model within 5
communication rounds with nearly no accuracy drop under both IID and Non-IID
settings. Our defense is also complementary to existing server-based robust
aggregation approaches and can further improve the robustness of FL under
extremely strong attacks.
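To make the key idea concrete, the sketch below shows a client-side perturbation step of the kind described above. It is a minimal illustration, not the authors' algorithm: the function name is invented, the "least-changed coordinates" heuristic is an assumed stand-in for the parameter space where attack effects persist (the paper derives that space from the local training dynamics), and Laplace noise is one natural choice of perturbation.

```python
import torch

def wbc_style_perturb(global_params, local_params, noise_scale=0.05, frac=0.1):
    """Hypothetical client-side perturbation step (illustrative only): after
    local training, locate the coordinates that changed least -- used here as
    a stand-in for the space where a poisoning effect can persist -- and add
    small Laplace noise to them before the update is sent to the server."""
    perturbed = []
    laplace = torch.distributions.Laplace(0.0, noise_scale)
    for g, l in zip(global_params, local_params):
        delta = (l - g).abs().flatten()
        k = max(1, int(frac * delta.numel()))
        idx = torch.topk(delta, k, largest=False).indices  # least-changed coords
        flat = l.flatten().clone()
        flat[idx] += laplace.sample((k,))
        perturbed.append(flat.view_as(l))
    return perturbed

# Toy usage: two tensors standing in for a model's parameter list.
g = [torch.zeros(10), torch.zeros(4, 4)]
l = [torch.randn(10), torch.randn(4, 4)]
print([p.shape for p in wbc_style_perturb(g, l)])
```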
Related papers
- Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense [3.685395311534351]
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private local data.
FL systems are vulnerable to attacks launched by malicious clients through data poisoning and model poisoning.
Existing defense methods typically focus on mitigating specific types of poisoning and are often ineffective against unseen types of attacks.
arXiv Detail & Related papers (2024-08-05T20:27:45Z)
- Poisoning with A Pill: Circumventing Detection in Federated Learning [33.915489514978084]
This paper proposes a generic and attack-agnostic augmentation approach designed to enhance the effectiveness and stealthiness of existing FL poisoning attacks against detection in FL.
Specifically, it employs a three-stage methodology that strategically constructs a pill, poisons it, and injects it during FL training, with the stages named pill construction, pill poisoning, and pill injection, respectively.
arXiv Detail & Related papers (2024-07-22T05:34:47Z)
- Model Poisoning Attacks to Federated Learning via Multi-Round Consistency [42.132028389365075]
We propose PoisonedFL, which enforces multi-round consistency among the malicious clients' model updates.
Our empirical evaluation on five benchmark datasets shows that PoisonedFL breaks eight state-of-the-art defenses and outperforms seven existing model poisoning attacks.
arXiv Detail & Related papers (2024-04-24T03:02:21Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
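As a rough illustration of inspecting client updates in the frequency domain, the sketch below takes the low-frequency DCT coefficients of each flattened update and keeps the clients closest to the median spectrum; the filtering rule and all names are assumptions for illustration, since FreqFed's own pipeline aggregates via clustering of the frequency components.

```python
import numpy as np
from scipy.fft import dct

def frequency_filtered_mean(updates, n_coeffs=32):
    """Illustrative only (not FreqFed's actual clustering-based aggregation):
    filter client updates by comparing their low-frequency DCT spectra.
    `updates` is a list of equally sized 1-D arrays, one flattened model
    update per client."""
    spectra = np.stack([dct(u, norm="ortho")[:n_coeffs] for u in updates])
    center = np.median(spectra, axis=0)              # robust central spectrum
    dists = np.linalg.norm(spectra - center, axis=1)
    keep = dists <= np.median(dists)                 # keep the closer half
    return np.mean([u for u, k in zip(updates, keep) if k], axis=0)

# Toy usage: 8 benign-looking updates plus 2 scaled-up (suspicious) ones.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 256) for _ in range(8)] + [rng.normal(0, 10, 256) for _ in range(2)]
print(frequency_filtered_mean(updates).shape)
```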
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive because it enables privacy-preserving training on distributed data.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of a locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
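The top-k/bottom-k observation lends itself to a simple pairwise score, sketched below; the function name and the exact scoring rule are illustrative assumptions rather than FedCPA's actual aggregation method.

```python
import torch

def critical_param_overlap(update_a, update_b, k=100):
    """Illustrative pairwise score (not FedCPA's exact rule): fraction of
    shared indices among the top-k and bottom-k magnitude changes of two
    client updates. Benign clients are expected to score higher."""
    def idx_set(u, largest):
        return set(torch.topk(u.abs().flatten(), k, largest=largest).indices.tolist())
    top = len(idx_set(update_a, True) & idx_set(update_b, True)) / k
    bottom = len(idx_set(update_a, False) & idx_set(update_b, False)) / k
    return 0.5 * (top + bottom)

# Toy usage: similar updates score higher than an unrelated (poisoned-like) one.
base = torch.randn(1000)
print(critical_param_overlap(base, base + 0.01 * torch.randn(1000)))
print(critical_param_overlap(base, torch.randn(1000)))
```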
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)