SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks
- URL: http://arxiv.org/abs/2309.10607v1
- Date: Tue, 19 Sep 2023 13:31:33 GMT
- Title: SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks
- Authors: Zizhen Liu, Weiyang He, Chip-Hong Chang, Jing Ye, Huawei Li, Xiaowei Li
- Abstract summary: Federated learning (FL) is attractive for leveraging distributed training data in a privacy-preserving manner.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of a locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
- Score: 12.580891810557482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Federated learning (FL) is attractive for leveraging distributed training data in a privacy-preserving manner, the credibility of participating clients and their non-inspectable data pose new security threats, of which poisoning attacks are particularly rampant and hard to defend against without compromising privacy, performance or other desirable properties of FL. To tackle this problem, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of a locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by an attention-guided self-knowledge distillation in which the teacher and student models are optimized locally for the task loss, distillation loss and attention-based loss simultaneously. SPFL imposes no restriction on the communication protocol and aggregator at the server. It can work in tandem with any existing secure aggregation algorithms and protocols for augmented security and privacy guarantees. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks. The attack success rate of an SPFL-trained model is at most 3$\%$ above that of a clean model, even if the poisoning attack is launched in every iteration with all clients but one being malicious. Meanwhile, it improves the model quality on normal inputs compared to FedAvg, both under attack and in the absence of an attack.
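The purification step above combines three locally computed terms. Below is a minimal sketch of such a combined objective, assuming a PyTorch setup; the attention formulation, the weights alpha/beta/t, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the attention-guided self-distillation loss described
# in the abstract; names, weights, and the attention map are assumptions.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse the channel dimension of a conv feature map (B, C, H, W) into a
    # normalized spatial attention map, a common formulation in attention distillation.
    a = feat.pow(2).mean(dim=1)              # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)  # (B, H*W)

def spfl_local_loss(student_logits, teacher_logits, student_feats, teacher_feats,
                    labels, t=2.0, alpha=1.0, beta=1.0):
    """Task loss + distillation loss + attention-based loss (weights assumed).

    student_feats / teacher_feats: lists of intermediate conv feature maps."""
    task = F.cross_entropy(student_logits, labels)
    distill = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                       F.softmax(teacher_logits / t, dim=1),
                       reduction="batchmean") * (t * t)
    attn = sum(F.mse_loss(attention_map(s), attention_map(tf))
               for s, tf in zip(student_feats, teacher_feats))
    return task + alpha * distill + beta * attn
```

In this reading, the student would be the aggregated model received from the server and the teacher the client's locally purified model, following the abstract's description.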
Related papers
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models [2.7539214125526534]
Federated Learning (FL) thrives in training a global model with numerous clients.
Recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel byzantine-robust FL method that detects malicious clients and discards malicious local updates (a generic detect-and-discard sketch follows this entry).
arXiv Detail & Related papers (2024-03-05T10:36:27Z)
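FLGuard's contrastive-model detector is not spelled out above. As a plainly generic illustration of the detect-and-discard pattern it describes (not FLGuard's actual mechanism), the following sketch filters client updates by cosine similarity to a robust reference before averaging; the names and threshold are assumptions.

```python
# Generic "detect and discard" aggregation sketch: flag client updates whose
# cosine similarity to the median update direction is low, then average only
# the retained updates. A simple baseline for illustration, not FLGuard.
import numpy as np

def filter_and_average(updates, sim_threshold=0.0):
    """updates: list of 1-D numpy arrays (flattened client model updates)."""
    U = np.stack(updates)                         # (num_clients, dim)
    reference = np.median(U, axis=0)              # robust reference direction
    ref_norm = np.linalg.norm(reference) + 1e-12
    sims = U @ reference / (np.linalg.norm(U, axis=1) * ref_norm + 1e-12)
    keep = sims > sim_threshold                   # discard dissimilar (suspicious) updates
    return U[keep].mean(axis=0), np.where(~keep)[0]

# Example: four benign updates plus one sign-flipped (poisoned) update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.1, 0.01, size=1000) for _ in range(4)]
poisoned = -10 * benign[0]
agg, discarded = filter_and_average(benign + [poisoned])
print("discarded client indices:", discarded)     # expected: [4]
```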
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain (a sketch of this idea follows this entry).
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
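A minimal sketch of the frequency-domain idea mentioned for FreqFed, assuming NumPy/SciPy: flatten each client update, take its DCT, and aggregate on the dominant low-frequency coefficients. The truncation length and plain averaging are illustrative assumptions, not FreqFed's exact pipeline.

```python
# Sketch: move flattened client updates into the frequency domain (DCT) and
# aggregate on the dominant low-frequency components.
# Illustrative assumptions: truncation to k coefficients and plain averaging.
import numpy as np
from scipy.fft import dct, idct

def freq_aggregate(updates, k=64):
    """updates: list of 1-D numpy arrays (flattened client model updates)."""
    spectra = np.stack([dct(u, norm="ortho") for u in updates])  # (clients, dim)
    low = spectra[:, :k]               # low frequencies carry most of the signal
    # A real defense would also inspect or cluster `low` to drop anomalous
    # clients; here we simply average the retained coefficients.
    agg_spectrum = np.zeros(spectra.shape[1])
    agg_spectrum[:k] = low.mean(axis=0)
    return idct(agg_spectrum, norm="ortho")       # back to parameter space

rng = np.random.default_rng(1)
updates = [rng.normal(0.0, 0.1, size=1024) for _ in range(5)]
aggregated = freq_aggregate(updates)
print(aggregated.shape)  # (1024,)
```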
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness (a generic trigger reverse-engineering sketch follows this entry).
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
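The FLIP entry refers to trigger reverse engineering only abstractly. Below is a generic trigger reverse-engineering loop in the well-known Neural Cleanse style (not FLIP's implementation): optimize a small mask and pattern that push clean inputs toward a chosen target label. The model, data shapes, and hyperparameters are placeholders.

```python
# Generic trigger reverse engineering: find a mask + pattern that force a
# target label on clean inputs; an abnormally small mask hints at a backdoor.
import torch
import torch.nn as nn

def reverse_engineer_trigger(model, inputs, target_label, steps=200, lam=0.01):
    """inputs: (N, C, H, W) batch of clean images in [0, 1]."""
    mask = torch.zeros(1, 1, *inputs.shape[2:], requires_grad=True)   # where the trigger is applied
    pattern = torch.zeros(1, *inputs.shape[1:], requires_grad=True)   # trigger content
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    target = torch.full((inputs.shape[0],), target_label, dtype=torch.long)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        m = torch.sigmoid(mask)
        p = torch.sigmoid(pattern)
        stamped = (1 - m) * inputs + m * p                 # overlay candidate trigger
        loss = ce(model(stamped), target) + lam * m.abs().sum()  # small-mask prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()

# Usage sketch with a toy model:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))
mask, pattern = reverse_engineer_trigger(model, torch.rand(16, 3, 8, 8), target_label=2)
print(mask.abs().sum().item())   # an unusually small mask norm is the red flag
```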
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack (a classic gradient-inversion sketch follows this entry).
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
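For orientation on what a gradient-inversion attack looks like, here is a self-contained sketch of the classic "deep leakage from gradients" formulation (not the new baseline attack proposed in the paper above): the attacker optimizes dummy data whose gradient matches the gradient shared by a victim client. The toy model and settings are illustrative.

```python
# Minimal gradient-inversion sketch: recover a private example by matching the
# gradient it produced. Tiny model and one example for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

# "Observed" gradient shared by a victim client for one private example.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data so its gradient matches the observed one.
x_fake = torch.rand(1, 1, 28, 28, requires_grad=True)
y_fake = torch.randn(1, 10, requires_grad=True)   # soft label keeps it differentiable
opt = torch.optim.LBFGS([x_fake, y_fake])

def closure():
    opt.zero_grad()
    pred = model(x_fake)
    dummy_loss = torch.sum(-torch.softmax(y_fake, dim=1) * torch.log_softmax(pred, dim=1))
    fake_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((fg - tg) ** 2).sum() for fg, tg in zip(fake_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)
print("reconstruction error:", (x_fake - x_true).abs().mean().item())
```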
- FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective [35.10520095377653]
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices.
Recent works have demonstrated that FL is vulnerable to model poisoning attacks.
We propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks.
arXiv Detail & Related papers (2021-10-26T17:13:35Z)
- Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation [7.979659145328856]
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker (a behavior-monitoring sketch follows this entry).
arXiv Detail & Related papers (2021-01-24T20:52:55Z)
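A hedged sketch of round-over-round behavior monitoring in the spirit of the attestedFL entry above: persist each worker's model across rounds and flag workers whose updates stop moving toward the current global model. The scoring rule, threshold, and class name are assumptions, not attestedFL's exact attestation criteria.

```python
# Behavior monitoring sketch: keep each worker's previous model and score how
# well its latest movement aligns with the direction toward the global model.
import numpy as np

class BehaviorMonitor:
    def __init__(self, threshold=0.0):
        self.history = {}          # worker_id -> previous flattened local model
        self.threshold = threshold

    def score(self, worker_id, local_model, global_model):
        """Cosine between the worker's round-over-round movement and the
        direction toward the global model; consistently low scores are suspicious."""
        prev = self.history.get(worker_id)
        self.history[worker_id] = local_model.copy()
        if prev is None:
            return None                                   # nothing to compare yet
        movement = local_model - prev
        toward_global = global_model - prev
        denom = np.linalg.norm(movement) * np.linalg.norm(toward_global) + 1e-12
        return float(movement @ toward_global / denom)

    def is_suspicious(self, score):
        return score is not None and score < self.threshold

# Usage sketch: an honest worker steadily moving toward the global model scores ~1.
monitor = BehaviorMonitor()
rng = np.random.default_rng(2)
global_model = rng.normal(size=100)
local = np.zeros(100)
for rnd in range(3):
    local = local + 0.3 * (global_model - local)
    s = monitor.score("worker-1", local, global_model)
    print(rnd, s, monitor.is_suspicious(s))
```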