Mitigating Backdoors in Federated Learning with FLD
- URL: http://arxiv.org/abs/2303.00302v2
- Date: Mon, 18 Dec 2023 15:18:11 GMT
- Title: Mitigating Backdoors in Federated Learning with FLD
- Authors: Yihang Lin, Pengyuan Zhou, Zhiqian Wu, Yong Liao
- Abstract summary: Federated learning allows clients to collaboratively train a global model without uploading raw data for privacy preservation.
This feature has recently been found responsible for federated learning's vulnerability to backdoor attacks.
We propose Federated Layer Detection (FLD), a novel model filtering approach for effectively defending against backdoor attacks.
- Score: 7.908496863030483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows clients to collaboratively train a global model
without uploading raw data for privacy preservation. This feature, i.e., the
inability to review participants' datasets, has recently been found responsible
for federated learning's vulnerability to backdoor attacks.
Existing defense methods fall short in two respects: 1) they consider only
very specific and limited attacker models and are unable to cope with advanced
backdoor attacks, such as distributed backdoor attacks, which break the global
trigger down into multiple distributed triggers; 2) they conduct detection at
model granularity, so their performance is affected by the model dimension. To
address these challenges, we propose Federated Layer Detection
(FLD), a novel model filtering approach for effectively defending against
backdoor attacks. FLD examines models at layer granularity to capture
complete model details and effectively detect potential backdoor models
regardless of model dimension. We provide a theoretical analysis and proof for
the convergence of FLD. Extensive experiments demonstrate that FLD effectively
mitigates state-of-the-art backdoor attacks with negligible impact on the
accuracy of the primary task.
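To make the layer-granularity idea concrete, below is a minimal sketch of how a server-side filter might score client updates layer by layer. The robust-statistics choices (coordinate-wise median center, MAD-based normalization, the `z_thresh` cutoff) and the function name are assumptions made for illustration, not the exact scoring or decision rule defined in the FLD paper.

```python
import numpy as np

def flag_suspicious_clients(client_updates, z_thresh=2.5):
    """Layer-granularity outlier filtering over client model updates.

    client_updates: list of dicts mapping layer name -> flat numpy array.
    Returns the set of client indices flagged as potentially backdoored.

    Illustrative sketch only: per-layer distances to the coordinate-wise
    median are normalized within each layer and accumulated into a
    per-client score, which is then screened with a MAD-based rule.
    """
    layer_names = client_updates[0].keys()
    scores = np.zeros(len(client_updates))

    for layer in layer_names:
        stacked = np.stack([u[layer] for u in client_updates])  # (n_clients, dim)
        center = np.median(stacked, axis=0)                     # robust per-layer center
        dists = np.linalg.norm(stacked - center, axis=1)        # per-client layer distance
        # Normalize within the layer so every layer contributes comparably,
        # regardless of its parameter count.
        spread = np.median(np.abs(dists - np.median(dists))) + 1e-12
        scores += (dists - np.median(dists)) / spread

    # Flag clients whose accumulated layer scores are extreme outliers.
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return {i for i, s in enumerate(scores) if (s - med) / mad > z_thresh}
```

In each round the server would run such a filter on the received updates and exclude flagged clients from aggregation; because every layer is normalized before contributing to a client's score, large layers cannot drown out anomalies hidden in small ones, which is one way to avoid the dimension sensitivity of whole-model detection.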
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z) - Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks can subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Backdoor Defense in Federated Learning Using Differential Testing and
Outlier Detection [24.562359531692504]
We propose DifFense, an automated defense framework to protect an FL system from backdoor attacks.
Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
arXiv Detail & Related papers (2022-02-21T17:13:03Z) - Identifying Backdoor Attacks in Federated Learning via Anomaly Detection [31.197488921578984]
Federated learning is vulnerable to backdoor attacks.
This paper proposes an effective defense against the attack by examining shared model updates.
We demonstrate through extensive analyses that our proposed methods effectively mitigate state-of-the-art backdoor attacks.
arXiv Detail & Related papers (2022-02-09T07:07:42Z) - DeepSight: Mitigating Backdoor Attacks in Federated Learning Through
Deep Model Inspection [26.593268413299228]
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data.
DeepSight is a novel model filtering approach for mitigating backdoor attacks.
We show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
arXiv Detail & Related papers (2022-01-03T17:10:07Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude (a rough sketch of this clip-and-smooth step appears after this list).
arXiv Detail & Related papers (2021-06-15T16:50:54Z) - BaFFLe: Backdoor detection via Feedback-based Federated Learning [3.6895394817068357]
We propose Backdoor detection via Feedback-based Federated Learning (BAFFLE).
We show that BAFFLE reliably detects state-of-the-art backdoor attacks with a detection accuracy of 100% and a false-positive rate below 5%.
arXiv Detail & Related papers (2020-11-04T07:44:51Z)
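For the CRFL entry above, the clipping-and-smoothing step it describes can be illustrated roughly as follows. The clipping norm and noise scale are placeholder values, and the sketch omits the certification analysis that CRFL builds on top of this step.

```python
import numpy as np

def clip_and_smooth(global_params, clip_norm=1.0, noise_sigma=0.01, rng=None):
    """Illustrative clip-and-smooth step on aggregated global parameters.

    CRFL controls global model smoothness by clipping the parameter norm and
    adding noise; the values and exact procedure here are placeholders, not
    the paper's certified configuration.
    """
    rng = rng if rng is not None else np.random.default_rng()
    flat = np.concatenate([p.ravel() for p in global_params])
    # Project the global parameters onto a norm ball of radius clip_norm.
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
    # Add Gaussian noise to each (clipped) parameter tensor.
    return [p * scale + rng.normal(0.0, noise_sigma, size=p.shape)
            for p in global_params]
```

Clipping bounds how far any single aggregation step can move the global model, and the added noise is what enables the smoothing-style robustness certificates described in the paper.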