G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks
through Attributed Client Graph Clustering
- URL: http://arxiv.org/abs/2306.04984v2
- Date: Fri, 8 Dec 2023 02:36:02 GMT
- Title: G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks
through Attributed Client Graph Clustering
- Authors: Hao Yu, Chuan Ma, Meng Liu, Tianyu Du, Ming Ding, Tao Xiang, Shouling
Ji, Xinwang Liu
- Abstract summary: Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
- Score: 116.4277292854053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) offers collaborative model training without data
sharing but is vulnerable to backdoor attacks, where poisoned model weights
lead to compromised system integrity. Existing countermeasures, primarily based
on anomaly detection, are prone to erroneous rejections of normal weights while
accepting poisoned ones, largely due to shortcomings in quantifying
similarities among client models. Furthermore, other defenses demonstrate
effectiveness only when dealing with a limited number of malicious clients,
typically fewer than 10%. To alleviate these vulnerabilities, we present
G$^2$uardFL, a protective framework that reinterprets the identification of
malicious clients as an attributed graph clustering problem, thus safeguarding
FL systems. Specifically, this framework employs a client graph clustering
approach to identify malicious clients and integrates an adaptive mechanism to
amplify the discrepancy between the aggregated model and the poisoned ones,
effectively eliminating embedded backdoors. We also conduct a theoretical
analysis of convergence to confirm that G$^2$uardFL does not affect the
convergence of FL systems. Through empirical evaluation, comparing G$^2$uardFL
with cutting-edge defenses, such as FLAME (USENIX Security 2022) [28] and
DeepSight (NDSS 2022) [36], against various backdoor attacks including 3DFed
(SP 2023) [20], our results demonstrate its significant effectiveness in
mitigating backdoor attacks while having a negligible impact on the aggregated
model's performance on benign samples (i.e., the primary task performance). For
instance, in an FL system with 25% malicious clients, G$^2$uardFL reduces the
attack success rate to 10.61%, while maintaining a primary task performance of
73.05% on the CIFAR-10 dataset. This surpasses the performance of the
best-performing baseline, which merely achieves a primary task performance of
19.54%.
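As a rough illustration of the abstract's core idea, here is a minimal sketch of treating malicious-client identification as attributed graph clustering. Everything in it is an assumption made for illustration, not the paper's actual pipeline: updates are flattened weight deltas, edge weights are pairwise cosine similarities, spectral clustering with two clusters separates benign from poisoned clients, and the `gamma` term is a hypothetical stand-in for the adaptive discrepancy-amplification mechanism.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def flag_malicious(updates: np.ndarray) -> np.ndarray:
    """updates: (n_clients, n_params) flattened client weight deltas."""
    # Attributed client graph: nodes are clients, node attributes are their
    # update vectors, edge weights are pairwise cosine similarities.
    unit = updates / (np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12)
    affinity = np.clip(unit @ unit.T, 0.0, None)  # clustering needs non-negative weights
    labels = SpectralClustering(
        n_clusters=2, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    # Heuristic: the smaller cluster is presumed malicious (attackers are a minority).
    return labels == np.argmin(np.bincount(labels))

def aggregate(global_w: np.ndarray, updates: np.ndarray,
              malicious: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Average the updates flagged benign, then push the result away from the
    mean poisoned direction; gamma is a hypothetical stand-in for the paper's
    adaptive discrepancy-amplification mechanism."""
    new_w = global_w + updates[~malicious].mean(axis=0)
    if malicious.any():
        new_w = new_w - gamma * updates[malicious].mean(axis=0)
    return new_w
```

The minority-cluster heuristic mirrors the abstract's setting, where attackers, even at 25% of clients, remain a minority of participants.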
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing [6.957420925496431]
Federated learning (FL) allows training machine learning models on distributed data without compromising privacy.
FL is vulnerable to model-poisoning attacks where malicious clients tamper with their local models to manipulate the global model.
In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks.
arXiv Detail & Related papers (2024-03-19T19:15:38Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a hedged sketch of the frequency-domain idea appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FedDefender: Backdoor Attack Defense in Federated Learning [0.0]
Federated Learning (FL) is a privacy-preserving distributed machine learning technique.
We propose FedDefender, a defense mechanism against targeted poisoning attacks in FL.
arXiv Detail & Related papers (2023-07-02T03:40:04Z)
- Fedward: Flexible Federated Backdoor Defense Framework with Non-IID Data [14.160225621129076]
Adversaries can manipulate datasets and uploaded models by injecting triggers, mounting federated backdoor attacks.
Existing defense strategies consider specific and limited attacker models, and injecting a sufficient amount of noise only mitigates, rather than eliminates, backdoors.
We introduce a Flexible Federated Backdoor Defense Framework (Fedward) to ensure the elimination of backdoors.
arXiv Detail & Related papers (2023-07-01T15:01:03Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection [3.3711670942444014]
Federated learning (FL) enables multiple clients to train a model without compromising sensitive data.
The decentralized nature of FL makes it susceptible to adversarial attacks, especially backdoor insertion during training.
We propose FedGrad, a defense for FL that is resistant to cutting-edge backdoor attacks.
arXiv Detail & Related papers (2023-04-29T19:31:44Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger-reverse-engineering-based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
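For contrast with the FreqFed entry above, here is a hedged sketch of frequency-domain filtering of model updates. The DCT, the low-frequency cutoff `n_low`, and the KMeans clustering step are illustrative assumptions for this sketch; the paper's actual mechanism may differ.

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import KMeans

def freq_filter_aggregate(updates: np.ndarray, n_low: int = 64) -> np.ndarray:
    """updates: (n_clients, n_params); returns the mean of the updates whose
    low-frequency spectra fall in the majority cluster."""
    # Transform each flattened update into the frequency domain and keep
    # only the lowest-frequency coefficients as the client's fingerprint.
    spectra = dct(updates, axis=1, norm="ortho")[:, :n_low]
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
    majority = int(np.argmax(np.bincount(labels)))  # assume honest clients dominate
    return updates[labels == majority].mean(axis=0)
```

The intuition the sketch relies on is that backdoor perturbations concentrate differently across the spectrum than benign updates, so clustering low-frequency fingerprints separates the two groups without inspecting raw weights.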