CrowdGuard: Federated Backdoor Detection in Federated Learning
- URL: http://arxiv.org/abs/2210.07714v3
- Date: Tue, 22 Aug 2023 10:26:00 GMT
- Title: CrowdGuard: Federated Backdoor Detection in Federated Learning
- Authors: Phillip Rieger (1), Torsten Krauß (2), Markus Miettinen (1),
Alexandra Dmitrienko (2), Ahmad-Reza Sadeghi (1) ((1) Technical University
Darmstadt, (2) University of Würzburg)
- Abstract summary: This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
- Score: 39.58317527488534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising approach enabling multiple clients to
train Deep Neural Networks (DNNs) collaboratively without sharing their local
training data. However, FL is susceptible to backdoor (or targeted poisoning)
attacks. These attacks are initiated by malicious clients who seek to
compromise the learning process by introducing specific behaviors into the
learned model that can be triggered by carefully crafted inputs. Existing FL
safeguards have various limitations: they are restricted to specific data
distributions, reduce the global model's accuracy by excluding benign models or
adding noise, are vulnerable to adaptive defense-aware adversaries, or require
the server to access local models, enabling data inference attacks.
This paper presents a novel defense mechanism, CrowdGuard, that effectively
mitigates backdoor attacks in FL and overcomes the deficiencies of existing
techniques. It leverages clients' feedback on individual models, analyzes the
behavior of neurons in hidden layers, and eliminates poisoned models through an
iterative pruning scheme. CrowdGuard employs a server-located stacked
clustering scheme to enhance its resilience to rogue client feedback. The
evaluation results demonstrate that CrowdGuard achieves a 100%
True-Positive-Rate and True-Negative-Rate across various scenarios, including
IID and non-IID data distributions. Additionally, CrowdGuard withstands
adaptive adversaries while preserving the original performance of protected
models. To ensure confidentiality, CrowdGuard uses a secure and
privacy-preserving architecture leveraging Trusted Execution Environments
(TEEs) on both client and server sides.
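As a rough illustration of the mechanism described above, the following is a minimal sketch and not the authors' implementation: the hidden-layer feedback metric and the two-stage clustering are simplified stand-ins, the function names (client_feedback, server_stacked_clustering) are invented here, and the paper's actual pipeline, including its iterative pruning and the TEE-protected execution, is more involved.

```python
# Illustrative sketch (assumed interfaces, not the authors' code): each client
# inspects a received model's hidden-layer activations on its own data and
# reports a feedback vector; the server runs a simplified "stacked" clustering
# over all feedback to decide which local models look poisoned.
import numpy as np
import torch
from sklearn.cluster import AgglomerativeClustering

def client_feedback(hidden_layer, local_loader):
    """Hypothetical client-side check: average the activations that a suspect
    model produces in one hidden layer on the client's local data. The paper's
    actual metric over hidden-layer behavior is more elaborate."""
    feats = []
    with torch.no_grad():
        for x, _ in local_loader:
            h = hidden_layer(x)                 # activations for this batch
            feats.append(h.flatten(1).mean(0))  # mean activation vector
    return torch.stack(feats).mean(0).numpy()   # one feedback vector per model

def server_stacked_clustering(feedback):
    """Simplified two-stage clustering over feedback of shape
    (n_clients, n_models, d): first cluster each client's per-model vectors,
    then aggregate the per-client votes so that rogue feedback is outvoted."""
    votes = []
    for client_fb in feedback:
        labels = AgglomerativeClustering(n_clusters=2).fit_predict(client_fb)
        majority = np.bincount(labels).argmax()
        votes.append((labels != majority).astype(int))  # 1 = suspected poisoned
    votes = np.array(votes)
    return votes.mean(axis=0) > 0.5  # flagged by a majority of clients
```

In CrowdGuard itself, this analysis runs inside client- and server-side TEEs and poisoned models are removed through iterative pruning; the sketch only conveys why cross-client feedback combined with a second clustering stage can tolerate a minority of rogue reports.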
Related papers
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
Our experiments indicate that InferGuard is highly effective in defending against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive
Models [2.7539214125526534]
Federated Learning (FL) trains a global model collaboratively with numerous clients.
Recent research has proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel Byzantine-robust FL method that detects malicious clients and discards malicious local updates.
arXiv Detail & Related papers (2024-03-05T10:36:27Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - FedDefender: Backdoor Attack Defense in Federated Learning [0.0]
Federated Learning (FL) is a privacy-preserving distributed machine learning technique.
We propose FedDefender, a defense mechanism against targeted poisoning attacks in FL.
arXiv Detail & Related papers (2023-07-02T03:40:04Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries, is effective in real-world data scenarios, and adds an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve high attack success.
arXiv Detail & Related papers (2023-01-23T21:49:28Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.