Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
- URL: http://arxiv.org/abs/2501.04453v1
- Date: Wed, 08 Jan 2025 12:14:00 GMT
- Title: Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
- Authors: Bin Li, Xiaoye Miao, Yongheng Shang, Xinkui Zhao, Shuiguang Deng, Jianwei Yin
- Abstract summary: A gradient purification defense, named GPD, integrates seamlessly with existing DFL aggregation to defend against poisoning attacks.
It aims to mitigate the harm in model gradients while retaining the benefit in model weights for enhancing accuracy.
It significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
- Score: 21.892850886276317
- Abstract: Decentralized federated learning (DFL) is inherently vulnerable to poisoning attacks, as malicious clients can transmit manipulated model gradients to neighboring clients. Existing defense methods either reject suspicious gradients per iteration or restart DFL aggregation after detecting all malicious clients. They overlook the potential accuracy benefit from the discarded malicious gradients. In this paper, we propose a novel gradient purification defense, named GPD, that integrates seamlessly with existing DFL aggregation to defend against poisoning attacks. It aims to mitigate the harm in model gradients while retaining the benefit in model weights to enhance accuracy. For each benign client in GPD, a recording variable is designed to track the historically aggregated gradients from one of its neighbors. This allows benign clients to precisely detect malicious neighbors and swiftly mitigate aggregated malicious gradients via historical consistency checks. Upon mitigation, GPD optimizes model weights by aggregating gradients solely from benign clients. This retains the previously beneficial portions from malicious clients and exploits the contributions from benign clients, thereby significantly enhancing model accuracy. We analyze the convergence of GPD, as well as its ability to achieve high accuracy. Extensive experiments over three datasets demonstrate that GPD is capable of mitigating poisoning attacks under both iid and non-iid data distributions. It significantly outperforms state-of-the-art defenses in terms of accuracy against various poisoning attacks.
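For concreteness, the following is a minimal Python sketch of one benign client following the mechanism described in the abstract: a per-neighbor recording variable, a consistency check that flags malicious neighbors, removal of the flagged neighbor's accumulated contribution, and aggregation over the remaining benign gradients. The specific consistency test (deviation from the mean of received gradients) and all hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

class GPDClient:
    """Minimal sketch of a benign client in the spirit of GPD.

    The abstract specifies (i) a per-neighbor recording variable that tracks the
    gradients historically aggregated from that neighbor, (ii) a historical
    consistency check that flags malicious neighbors, and (iii) removal of the
    flagged neighbor's accumulated contribution from the model weights. The
    concrete check used here (deviation from the mean of received gradients)
    is an illustrative assumption, not the paper's exact rule.
    """

    def __init__(self, dim, neighbor_ids, lr=0.01, threshold=3.0):
        self.weights = np.zeros(dim)
        self.lr = lr
        self.threshold = threshold
        # Recording variable: each neighbor's accumulated share of past aggregates.
        self.record = {nid: np.zeros(dim) for nid in neighbor_ids}
        self.flagged = set()

    def step(self, local_grad, neighbor_grads):
        """One DFL round: detect suspicious neighbors, purge their history,
        then update weights using only the local and benign neighbor gradients."""
        # Consistency check (assumed): flag neighbors whose current gradient
        # deviates strongly from the mean of all unflagged gradients.
        active = {nid: g for nid, g in neighbor_grads.items() if nid not in self.flagged}
        if active:
            mean_g = np.mean(list(active.values()), axis=0)
            spread = np.mean([np.linalg.norm(g - mean_g) for g in active.values()]) + 1e-12
            for nid, g in active.items():
                if np.linalg.norm(g - mean_g) > self.threshold * spread:
                    self.flagged.add(nid)
                    # Mitigation: undo this neighbor's historically aggregated
                    # gradients, which were previously subtracted from the weights.
                    self.weights += self.lr * self.record[nid]

        # Aggregate gradients solely from neighbors still considered benign.
        benign = {nid: g for nid, g in neighbor_grads.items() if nid not in self.flagged}
        stack = [local_grad] + list(benign.values())
        agg = np.mean(stack, axis=0)
        for nid, g in benign.items():
            self.record[nid] += g / len(stack)  # this neighbor's share of the aggregate
        self.weights -= self.lr * agg
        return self.weights
```

Because the recording variable stores exactly each neighbor's share of every past aggregate, adding `lr * record[nid]` back to the weights removes that neighbor's accumulated influence without discarding the contributions of the remaining clients.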
Related papers
- The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks through Model Poisoning [14.424323591908939]
In Federated Learning (FL), clients share gradients with a central server while keeping their data local.
Malicious servers could deliberately manipulate the models to reconstruct clients' data from shared gradients, posing significant privacy risks.
We introduce a new theoretical analysis approach, which uniformly models AGLAs as backdoor poisoning.
We propose Enhanced Gradient Global Vulnerability (EGGV), the first AGLA that achieves complete attack coverage while evading client-side detection.
arXiv Detail & Related papers (2025-02-06T14:31:14Z) - SMTFL: Secure Model Training to Untrusted Participants in Federated Learning [8.225656436115509]
Federated learning is an essential distributed model training technique.
Gradient inversion attacks and poisoning attacks pose significant risks to the privacy of training data and to model correctness.
We propose a novel approach called SMTFL to achieve secure model training in federated learning without relying on trusted participants.
arXiv Detail & Related papers (2025-02-04T06:12:43Z) - CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling [63.07948989346385]
Federated learning collaboratively trains a neural network on a global server.
Each local client receives the current global model weights and sends back parameter updates (gradients) based on its local private data.
Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors.
We present a novel defense tailored for large neural network models.
arXiv Detail & Related papers (2025-01-27T01:06:23Z) - RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks [20.55681622921858]
Model poisoning attacks greatly jeopardize the application of federated learning (FL).
In this work, we propose a novel proactive defense named RECESS against model poisoning attacks.
Unlike previous methods that score each iteration, RECESS considers clients' performance correlation across multiple iterations to estimate the trust score.
arXiv Detail & Related papers (2023-10-09T06:09:01Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability [62.105715985563656]
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
arXiv Detail & Related papers (2023-05-25T21:51:23Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation [122.83280749890078]
We propose an improved certified defense against general poisoning attacks, namely Finite Aggregation.
In contrast to DPA, which directly splits the training set into disjoint subsets, our method first splits the training set into smaller disjoint subsets.
We offer an alternative view of our method, bridging the designs of deterministic and aggregation-based certified defenses.
arXiv Detail & Related papers (2022-02-05T20:08:58Z) - Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering [32.904425716385575]
We show that the element-wise sign of the gradient vector can provide valuable insight for detecting model poisoning attacks.
We propose a novel approach called SignGuard to enable Byzantine-robust federated learning through collaborative malicious gradient filtering (a simplified sign-filtering sketch is given after this list).
arXiv Detail & Related papers (2021-09-13T11:15:15Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
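As a rough illustration of the sign-based filtering idea summarized in the SignGuard entry above, the sketch below keeps only client gradients whose norm and element-wise sign statistics stay close to the respective medians before aggregation. The two-stage structure, the thresholds, and the function names are simplifying assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def sign_statistics(grad):
    """Fraction of positive, zero, and negative entries in a gradient vector."""
    n = grad.size
    return np.array([(grad > 0).sum() / n, (grad == 0).sum() / n, (grad < 0).sum() / n])

def filter_gradients(grads, norm_ratio=(0.1, 3.0), sign_tol=0.2):
    """Return indices of gradients kept as (presumably) benign.

    Simplified two-stage filter: (1) discard gradients whose norm is far from
    the median norm; (2) discard gradients whose sign statistics deviate from
    the median sign statistics. Thresholds are illustrative assumptions.
    """
    norms = np.array([np.linalg.norm(g) for g in grads])
    med_norm = np.median(norms)
    stats = np.array([sign_statistics(g) for g in grads])
    med_stats = np.median(stats, axis=0)

    kept = []
    for i in range(len(grads)):
        norm_ok = norm_ratio[0] * med_norm <= norms[i] <= norm_ratio[1] * med_norm
        sign_ok = np.abs(stats[i] - med_stats).max() <= sign_tol
        if norm_ok and sign_ok:
            kept.append(i)
    return kept

# Usage: aggregate only the surviving gradients, e.g.
#   kept = filter_gradients(client_grads)
#   update = np.mean([client_grads[i] for i in kept], axis=0)
```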