Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
- URL: http://arxiv.org/abs/2501.04453v2
- Date: Wed, 26 Feb 2025 15:12:42 GMT
- Title: Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
- Authors: Bin Li, Xiaoye Miao, Yan Zhang, Jianwei Yin
- Abstract summary: The gradient purification defense, named GPD, integrates seamlessly with existing DFL aggregation to defend against poisoning attacks. It aims to mitigate the harm in model gradients while retaining the benefit in model weights for enhancing accuracy. It significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
- Score: 21.99122382358776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized federated learning (DFL) is inherently vulnerable to poisoning attacks, as malicious clients can transmit manipulated model gradients to neighboring clients. Existing defense methods either reject suspicious gradients per iteration or restart DFL aggregation after detecting all malicious clients. They overlook the potential accuracy benefit from the discarded malicious gradients. In this paper, we propose a novel gradient purification defense, named GPD, that integrates seamlessly with existing DFL aggregation to defend against poisoning attacks. It aims to mitigate the harm in model gradients while retaining the benefit in model weights for enhancing accuracy. For each benign client in GPD, a recording variable is designed to track the historically aggregated gradients from one of its neighbors. It allows benign clients to precisely detect malicious neighbors and swiftly mitigate aggregated malicious gradients via historical consistency checks. Upon mitigation, GPD optimizes model weights via aggregating gradients solely from benign clients. This retains the previously beneficial portions from malicious clients and exploits the contributions from benign clients, thereby significantly enhancing the model accuracy. We analyze the convergence of GPD, as well as its ability to achieve high accuracy. Extensive experiments over three datasets demonstrate that GPD is capable of mitigating poisoning attacks under both iid and non-iid data distributions. It significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
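As a rough illustration of the mechanism described in the abstract, the sketch below keeps one recording variable per neighbor, flags a neighbor whose accumulated record drifts away from the median of all records, rolls back that neighbor's recorded contribution, and then aggregates only the remaining gradients. The cosine-similarity test, the `sim_threshold` parameter, the rollback rule, and all names are illustrative assumptions, not the paper's actual GPD procedure.

```python
import numpy as np


def _cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)


class GradientPurifierSketch:
    """Per-client state for a gradient-purification-style defense (illustrative only)."""

    def __init__(self, dim, neighbor_ids, sim_threshold=0.0, lr=0.1):
        self.w = np.zeros(dim)               # local model weights
        self.lr = lr
        self.sim_threshold = sim_threshold   # assumed detection threshold
        # "Recording variable": aggregated contribution absorbed from each neighbor so far.
        self.record = {j: np.zeros(dim) for j in neighbor_ids}
        self.benign = set(neighbor_ids)

    def step(self, grads):
        """grads: {neighbor_id: gradient vector} received in this round."""
        if not self.benign:
            return self.w
        # 1) Historical consistency check: a neighbor whose cumulative record points
        #    away from the median of all records is flagged as malicious.
        median_rec = np.median(np.stack([self.record[j] for j in self.benign]), axis=0)
        for j in list(self.benign):
            if np.any(self.record[j]) and _cosine(self.record[j], median_rec) < self.sim_threshold:
                self.benign.discard(j)
                # 2) Mitigation (one possible reading): undo the contribution this
                #    neighbor previously injected into the local weights.
                self.w += self.record[j]
        # 3) Aggregate gradients only from neighbors still considered benign.
        if self.benign:
            agg = np.mean([grads[j] for j in self.benign], axis=0)
            for j in self.benign:
                self.record[j] += self.lr * grads[j] / len(self.benign)
            self.w -= self.lr * agg
        return self.w
```

In this sketch, each benign client would call `step` once per round with the gradients received from its neighbors; a neighbor, once flagged, is excluded from all subsequent aggregation.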
Related papers
- SMTFL: Secure Model Training to Untrusted Participants in Federated Learning [8.225656436115509]
Federated learning is an essential distributed model training technique.
Gradient inversion attacks and poisoning attacks pose significant risks to the privacy of training data and to model correctness.
We propose a novel approach called SMTFL to achieve secure model training in federated learning without relying on trusted participants.
arXiv Detail & Related papers (2025-02-04T06:12:43Z)
- CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling [63.07948989346385]
Federated learning collaboratively trains a neural network on a global server.
Each local client receives the current global model weights and sends back parameter updates (gradients) based on its local private data.
Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors.
We present a novel defense tailored for large neural network models.
arXiv Detail & Related papers (2025-01-27T01:06:23Z)
- RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks [20.55681622921858]
Model poisoning attacks greatly jeopardize the application of federated learning (FL).
In this work, we propose a novel proactive defense named RECESS against model poisoning attacks.
Unlike previous methods that score each iteration, RECESS considers clients' performance correlation across multiple iterations to estimate the trust score.
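As a loose illustration of scoring clients over multiple iterations rather than per round, the snippet below keeps a decayed running trust value per client. The per-round anomaly measure (distance from the mean update) and the decayed averaging are placeholder choices, not the actual RECESS estimator.

```python
import numpy as np


def update_trust(history, updates, decay=0.9):
    """history: {client_id: running trust}; updates: {client_id: update vector}."""
    mean_update = np.mean(np.stack(list(updates.values())), axis=0)
    for cid, u in updates.items():
        # Per-round anomaly: distance from the mean update (placeholder metric).
        anomaly = float(np.linalg.norm(u - mean_update))
        # Trust accumulates across iterations instead of being re-scored each round.
        history[cid] = decay * history.get(cid, 0.0) - (1.0 - decay) * anomaly
    return history
```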
arXiv Detail & Related papers (2023-10-09T06:09:01Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability [62.105715985563656]
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
arXiv Detail & Related papers (2023-05-25T21:51:23Z)
- Detection and Prevention Against Poisoning Attacks in Federated Learning [0.0]
This paper proposes and investigates a new approach for detecting and preventing several different types of poisoning attacks.
By comparing each client's accuracy to the average accuracy of all clients, AADD detects clients with an abnormal accuracy deviation.
The proposed implementation shows promising results in detecting poisoned clients and preventing the global model's accuracy from deteriorating.
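A minimal sketch of the accuracy-deviation check described above; the per-client validation accuracies and the deviation threshold are assumed inputs, not values taken from the paper.

```python
def flag_by_accuracy_deviation(client_acc, threshold=0.1):
    """client_acc: {client_id: validation accuracy}. Returns clients suspected of poisoning."""
    avg = sum(client_acc.values()) / len(client_acc)
    # A client whose accuracy falls well below the all-client average is flagged.
    return {cid for cid, acc in client_acc.items() if avg - acc > threshold}
```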
arXiv Detail & Related papers (2022-10-24T11:28:01Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that it achieves improved performance with robustness guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z)
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees for training data using gradient pruning (sketched below).
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
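To make the gradient-pruning idea concrete, here is a minimal sketch that keeps only the largest-magnitude gradient entries; the keep fraction is an assumed parameter and this is not the BEAS implementation.

```python
import numpy as np


def prune_gradient(g, keep_frac=0.1):
    """Zero out all but the largest-magnitude entries of gradient vector g."""
    k = max(1, int(len(g) * keep_frac))
    thresh = np.partition(np.abs(g), -k)[-k]   # magnitude of the k-th largest entry
    return np.where(np.abs(g) >= thresh, g, 0.0)
```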
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
- Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation [122.83280749890078]
We propose an improved certified defense against general poisoning attacks, namely Finite Aggregation.
In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets.
We offer an alternative view of our method, bridging the designs of deterministic and aggregation-based certified defenses.
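For context, the snippet below sketches the DPA-style baseline that Finite Aggregation refines: the training set is split into disjoint partitions, one base model is trained per partition, and predictions are made by majority vote, so poisoning m samples can affect at most m voters. The `train_fn` and `predict_fn` callables are hypothetical placeholders, and the FA-specific construction of smaller subsets is not reproduced here.

```python
from collections import Counter


def train_partitioned(dataset, k, train_fn):
    """Split the dataset into k disjoint partitions and train one base model per partition."""
    parts = [[] for _ in range(k)]
    for idx, example in enumerate(dataset):
        parts[idx % k].append(example)   # deterministic disjoint split (DPA hashes the sample itself)
    return [train_fn(part) for part in parts]


def predict_by_vote(models, x, predict_fn):
    """Aggregate base-model predictions by majority vote."""
    votes = Counter(predict_fn(m, x) for m in models)
    return votes.most_common(1)[0][0]
```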
arXiv Detail & Related papers (2022-02-05T20:08:58Z)
- Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering [32.904425716385575]
We show that the element-wise sign of the gradient vector can provide valuable insight for detecting model poisoning attacks.
We propose a novel approach called SignGuard to enable Byzantine-robust federated learning through collaborative malicious gradient filtering.
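As a simplified stand-in for the sign-based filtering described above, the snippet below computes each client's fraction of positive gradient entries and keeps only clients close to the median ratio; the real SignGuard pipeline combines norm filtering with clustering of several sign statistics, which this sketch does not reproduce.

```python
import numpy as np


def filter_by_sign_stats(grads, tol=0.1):
    """grads: {client_id: gradient vector}. Keep clients whose sign statistics look typical."""
    pos_ratio = {cid: float(np.mean(g > 0)) for cid, g in grads.items()}
    med = float(np.median(list(pos_ratio.values())))
    # Clients whose positive-sign ratio deviates strongly from the median are dropped.
    return {cid: g for cid, g in grads.items() if abs(pos_ratio[cid] - med) <= tol}
```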
arXiv Detail & Related papers (2021-09-13T11:15:15Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.