Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning
- URL: http://arxiv.org/abs/2501.12736v1
- Date: Wed, 22 Jan 2025 09:12:16 GMT
- Title: Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning
- Authors: Mingyuan Fan, Zhanyi Hu, Fuyi Wang, Cen Chen
- Abstract summary: Personalized federated learning (PFL) enables each client to maintain a private personalized model to cater to client-specific knowledge.
Bad-PFL employs features from natural data as its trigger, ensuring its longevity in personalized models.
Large-scale experiments across three benchmark datasets demonstrate the superior performance of our attack against various PFL methods.
- Abstract: Data heterogeneity and backdoor attacks rank among the most significant challenges facing federated learning (FL). To address data heterogeneity, personalized federated learning (PFL) enables each client to maintain a private personalized model that captures client-specific knowledge. Meanwhile, vanilla FL has proven vulnerable to backdoor attacks. However, recent advances in the PFL community have demonstrated a potential immunity to such attacks. This paper explores this intersection further, revealing that existing federated backdoor attacks fail in PFL because backdoors tied to manually designed triggers struggle to survive in personalized models. To tackle this, we design Bad-PFL, which employs features from natural data as our trigger. As long as the model is trained on natural data, it inevitably embeds the backdoor associated with our trigger, ensuring its longevity in personalized models. Moreover, our trigger undergoes mutual reinforcement training with the model, further solidifying the backdoor's durability and enhancing attack effectiveness. Large-scale experiments across three benchmark datasets demonstrate the superior performance of our attack against various PFL methods, even when these are equipped with state-of-the-art defense mechanisms.
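A minimal sketch of the attack loop the abstract describes, in PyTorch: the trigger is not a fixed patch but a small, input-specific perturbation optimized to steer samples toward the natural features of the target class under the current model, and trigger generation alternates with poisoned local training (the "mutual reinforcement" above). The names (`make_trigger`, `poison_step`), the PGD-style inner loop, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def make_trigger(model, x, target, steps=10, eps=8 / 255, alpha=2 / 255):
    """PGD-style optimization of a bounded perturbation that moves x toward
    the natural features of the target class under the *current* model."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: make target class likely
            delta.clamp_(-eps, eps)             # keep the trigger imperceptible
        delta.grad.zero_()
    return delta.detach()

def poison_step(model, opt, x, y, target_cls, poison_frac=0.3):
    """One local step on the malicious client: a fraction of the batch is
    triggered and relabeled, so the model reinforces the trigger features."""
    n = max(1, int(poison_frac * x.size(0)))
    target = torch.full((n,), target_cls, dtype=torch.long, device=x.device)
    x_poison = x[:n] + make_trigger(model, x[:n], target)
    loss = F.cross_entropy(model(torch.cat([x_poison, x[n:]])),
                           torch.cat([target, y[n:]]))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```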
Related papers
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning
We consider feature reconstruction attacks, a common risk aimed at compromising input data.
We show that federated models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z)
- BadSFL: Backdoor Attack against Scaffold Federated Learning
Federated learning (FL) enables the training of deep learning models on distributed clients to preserve data privacy.
BadSFL is a novel backdoor attack method designed for the FL framework using the scaffold aggregation algorithm in non-IID settings.
BadSFL remains effective in the global model for over 60 rounds, up to three times longer than existing baseline attacks.
arXiv Detail & Related papers (2024-11-25T07:46:57Z)
- Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
We propose PFedBA, a stealthy and effective backdoor attack strategy applicable to PFL systems.
Our study sheds light on the subtle yet potent backdoor threats to PFL systems, urging the community to bolster defenses against emerging backdoor challenges.
arXiv Detail & Related papers (2024-06-10T12:14:05Z)
- Practical and General Backdoor Attacks against Vertical Federated Learning
Federated learning (FL) aims to facilitate data collaboration across multiple organizations without exposing data privacy.
BadVFL is a novel and practical approach to inject backdoor triggers into victim models without label information.
BadVFL achieves over 93% attack success rate with only 1% poisoning rate.
arXiv Detail & Related papers (2023-06-19T07:30:01Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks (see the sketch after this entry).
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
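The robustness mechanism credited above is structural: only part of the model is ever shared. Below is a minimal, hypothetical sketch of partial model-sharing in PyTorch, where FedAvg is restricted to a shared backbone while each client's classification head stays private, so a backdoor planted in the aggregated weights must survive a head it never trained with. Module names, layer sizes, and the MNIST-like input are illustrative.

```python
import copy
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10):
        super().__init__()
        # Shared feature extractor: uploaded to and averaged by the server.
        self.backbone = nn.Sequential(nn.Flatten(),
                                      nn.Linear(784, feat_dim), nn.ReLU())
        # Private head: personalized, never leaves the client.
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

def aggregate_backbones(clients):
    """FedAvg restricted to the shared part: average backbone weights only."""
    avg = copy.deepcopy(clients[0].backbone.state_dict())
    for key in avg:
        for c in clients[1:]:
            avg[key] += c.backbone.state_dict()[key]
        avg[key] /= len(clients)
    for c in clients:              # broadcast; private heads stay untouched
        c.backbone.load_state_dict(avg)
```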
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method achieves improvements with guaranteed robustness (see the sketch after this entry).
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
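The FLIP summary mentions a trigger reverse-engineering based defense but not its details; the sketch below shows a common recipe in the style of Neural Cleanse that such defenses build on: jointly optimize a mask and a pattern so that stamping any input flips the model to a suspected target label, with an L1 penalty keeping the mask small. A benign client could then harden its model by training on inputs stamped with the recovered trigger. Function names and hyperparameters are illustrative assumptions, not FLIP's exact procedure.

```python
import itertools
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_cls,
                             steps=200, lam=1e-3, lr=0.1):
    x0, _ = next(iter(loader))
    # Per-pixel mask and pattern, optimized jointly.
    mask = torch.zeros(x0.shape[1:], requires_grad=True)
    pattern = torch.zeros(x0.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    data = itertools.cycle(loader)
    for _ in range(steps):
        x, _ = next(data)
        m = torch.sigmoid(mask)                          # keep mask in [0, 1]
        stamped = m * torch.tanh(pattern) + (1 - m) * x  # stamp candidate trigger
        target = torch.full((x.size(0),), target_cls, dtype=torch.long)
        loss = F.cross_entropy(model(stamped), target) + lam * m.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(pattern).detach()
```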
- Neurotoxin: Durable Backdoors in Federated Learning
Federated learning systems have an inherent vulnerability to adversarial backdoor attacks during training.
We propose Neurotoxin, a simple one-line modification to existing backdoor attacks that works by attacking parameters that change less in magnitude during training (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-12T16:52:52Z)
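The "one-line modification" summarized above is easy to sketch: constrain the malicious update to the coordinates that benign training touches least, so subsequent benign rounds are unlikely to overwrite the backdoor. In the sketch, `benign_grad` stands for the attacker's estimate of the benign gradient (e.g., the previous round's global update); names and the top-k fraction are illustrative.

```python
import torch

def neurotoxin_project(malicious_update, benign_grad, top_frac=0.1):
    """Zero out the top-|top_frac| coordinates (by benign-gradient magnitude),
    keeping the attack inside rarely-updated parameter directions."""
    flat = benign_grad.flatten().abs()
    k = int(top_frac * flat.numel())
    top_idx = torch.topk(flat, k).indices   # heavily-updated coordinates
    mask = torch.ones_like(flat)
    mask[top_idx] = 0.0                     # forbid those directions
    return malicious_update * mask.view_as(benign_grad)
```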
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control global model smoothness, which yields a sample-wise robustness certification against backdoors of limited magnitude (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
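A minimal sketch of the server-side mechanism the summary describes: after aggregation, the global parameter vector is clipped to a norm budget and perturbed with Gaussian noise, which is what enables the smoothing-based robustness certificate. The thresholds below are illustrative, not the paper's values.

```python
import torch

def clip_and_perturb(global_params, clip_norm=15.0, sigma=0.01):
    """Post-aggregation step: clip the global parameter vector to a norm
    budget, then add isotropic Gaussian noise (parameter smoothing)."""
    total_norm = torch.cat([p.flatten() for p in global_params]).norm().item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    return [p * scale + sigma * torch.randn_like(p) for p in global_params]
```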
- Meta Federated Learning
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
- Defending against Backdoors in Federated Learning with Robust Learning Rate
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol (see the sketch after this entry).
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
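The summary leaves the mechanism implicit. In the robust learning rate approach, the server adjusts the learning rate per parameter dimension based on the sign agreement of client updates: where fewer than a threshold of clients agree on the update's direction, the sign of the learning rate is flipped, pushing the model away from the suspect direction. A minimal sketch under that reading (names and the threshold value are illustrative):

```python
import torch

def robust_lr_aggregate(client_updates, lr=1.0, theta=4):
    """client_updates: list of flattened update tensors, one per client."""
    stacked = torch.stack(client_updates)          # shape [n_clients, dim]
    agreement = stacked.sign().sum(dim=0).abs()    # per-coordinate sign agreement
    per_dim_lr = torch.where(agreement >= theta,
                             torch.full_like(agreement, lr),
                             torch.full_like(agreement, -lr))
    return per_dim_lr * stacked.mean(dim=0)        # signed per-dimension update
```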