You Can Backdoor Personalized Federated Learning
- URL: http://arxiv.org/abs/2307.15971v2
- Date: Mon, 18 Sep 2023 06:25:32 GMT
- Title: You Can Backdoor Personalized Federated Learning
- Authors: Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li and Ming Gao
- Abstract summary: Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario.
We propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies.
- Score: 18.91908598410108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing research primarily focuses on backdoor attacks and defenses within
the generic federated learning scenario, where all clients collaborate to train
a single global model. A recent study conducted by Qin et al. (2023) marks the
initial exploration of backdoor attacks within the personalized federated
learning (pFL) scenario, where each client constructs a personalized model
based on its local data. Notably, the study demonstrates that pFL methods with
\textit{parameter decoupling} can significantly enhance robustness against
backdoor attacks. However, in this paper, we demonstrate that pFL methods with
parameter decoupling are still vulnerable to backdoor attacks. Their apparent
resistance is attributed to the heterogeneous classifiers between malicious
clients and their benign counterparts. We analyze two
direct causes of the heterogeneous classifiers: (1) data heterogeneity
inherently exists among clients and (2) poisoning by malicious clients further
exacerbates the data heterogeneity. To address these issues, we propose a
two-pronged attack method, BapFL, which comprises two simple yet effective
strategies: (1) poisoning only the feature encoder while keeping the classifier
fixed and (2) diversifying the classifier by introducing noise to simulate the
classifiers of benign clients. Extensive experiments on three benchmark
datasets under varying conditions demonstrate the effectiveness of our proposed
attack. Additionally, we evaluate the effectiveness of six widely used defense
methods and find that BapFL still poses a significant threat even in the
presence of the best defense, Multi-Krum. We hope to inspire further research
on attack and defense strategies in pFL scenarios. The code is available at:
https://github.com/BapFL/code.
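As a rough illustration of the two strategies above, here is a minimal PyTorch-style sketch of a malicious client's local update; the module layout (encoder/classifier attributes), the add_trigger function, and the noise scale sigma are illustrative assumptions, not the authors' released implementation (see the repository above for that).

# Hypothetical sketch of the two BapFL strategies described in the abstract:
# (1) update only the feature encoder while the classifier stays fixed, and
# (2) perturb the classifier with noise before uploading to mimic the
# heterogeneous classifiers of benign clients.
import copy
import torch
import torch.nn.functional as F

def malicious_local_update(model, loader, add_trigger, target_label,
                           lr=0.01, epochs=1, poison_frac=0.5, sigma=0.05):
    model = copy.deepcopy(model)

    # Strategy 1: freeze the classifier head; only the feature encoder learns
    # to map triggered inputs to the attacker's target label.
    for p in model.classifier.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.SGD(model.encoder.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for x, y in loader:
            # Poison a fraction of each batch with the trigger pattern.
            n_poison = int(poison_frac * x.size(0))
            x_p, y_p = x.clone(), y.clone()
            x_p[:n_poison] = add_trigger(x_p[:n_poison])
            y_p[:n_poison] = target_label

            optimizer.zero_grad()
            loss = F.cross_entropy(model.classifier(model.encoder(x_p)), y_p)
            loss.backward()
            optimizer.step()

    # Strategy 2: add Gaussian noise to the (frozen) classifier so the uploaded
    # head looks as heterogeneous as those of benign clients.
    with torch.no_grad():
        for p in model.classifier.parameters():
            p.add_(sigma * torch.randn_like(p))

    return {k: v.detach().clone() for k, v in model.state_dict().items()}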
Related papers
- Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation [120.42853706967188]
We explore potential backdoor attacks on model adaptation launched by well-designed poisoned target data.
We propose a plug-and-play method named MixAdapt, which can be combined with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
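To make this observation concrete, the sketch below scores each client's flattened update by how much its top-k and bottom-k parameter index sets overlap with those of the other clients and keeps only the most consistent updates for averaging; this is a simplified reading of the idea, not FedCPA's actual scoring, and all names and thresholds are illustrative.

import torch

def critical_sets(update, k):
    # Indices of the k largest and k smallest entries of a flattened update.
    top = set(torch.topk(update, k).indices.tolist())
    bottom = set(torch.topk(-update, k).indices.tolist())
    return top, bottom

def consistency_scores(updates, k=100):
    sets = [critical_sets(u, k) for u in updates]
    scores = []
    for i, (ti, bi) in enumerate(sets):
        overlap = 0.0
        for j, (tj, bj) in enumerate(sets):
            if i == j:
                continue
            overlap += len(ti & tj) / k + len(bi & bj) / k
        # Benign updates should share critical parameters and thus score higher.
        scores.append(overlap / (2 * (len(sets) - 1)))
    return scores

def filtered_average(updates, k=100, keep_frac=0.7):
    scores = torch.tensor(consistency_scores(updates, k))
    n_keep = max(1, int(keep_frac * len(updates)))
    kept = torch.topk(scores, n_keep).indices
    return torch.stack([updates[i] for i in kept]).mean(dim=0)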
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Practical and General Backdoor Attacks against Vertical Federated Learning [3.587415228422117]
Federated learning (FL) aims to facilitate data collaboration across multiple organizations without compromising data privacy.
BadVFL is a novel and practical approach for injecting backdoor triggers into victim models without label information.
BadVFL achieves an attack success rate of over 93% with only a 1% poisoning rate.
arXiv Detail & Related papers (2023-06-19T07:30:01Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL is vulnerable to poisoning attacks that undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Mitigating Backdoors in Federated Learning with FLD [7.908496863030483]
Federated learning allows clients to collaboratively train a global model without uploading their raw data, preserving privacy.
This feature has recently been found to be responsible for federated learning's vulnerability to backdoor attacks.
We propose Federated Layer Detection (FLD), a novel model filtering approach for effectively defending against backdoor attacks.
arXiv Detail & Related papers (2023-03-01T07:54:54Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering-based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection [24.562359531692504]
We propose DifFense, an automated defense framework to protect an FL system from backdoor attacks.
Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
arXiv Detail & Related papers (2022-02-21T17:13:03Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
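A minimal sketch of this clipping-and-smoothing step on a flattened global parameter vector is shown below; the norm bound rho and noise scale sigma are assumed hyperparameters for illustration, not CRFL's actual settings or certification procedure.

import torch

def clip_and_smooth(global_params, rho=1.0, sigma=0.01):
    # Clipping bounds how far any (possibly backdoored) update can push the
    # aggregated parameters; the Gaussian noise provides the smoothness used
    # to certify robustness against limited-magnitude backdoor triggers.
    norm = global_params.norm()
    clipped = global_params * min(1.0, rho / (norm.item() + 1e-12))
    return clipped + sigma * torch.randn_like(clipped)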
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked from the perspective of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- BaFFLe: Backdoor detection via Feedback-based Federated Learning [3.6895394817068357]
We propose Backdoor detection via Feedback-based Federated Learning (BAFFLE)
We show that BAFFLE reliably detects state-of-the-art backdoor attacks with a detection accuracy of 100% and a false-positive rate below 5%.
arXiv Detail & Related papers (2020-11-04T07:44:51Z)