Revisiting Personalized Federated Learning: Robustness Against Backdoor
Attacks
- URL: http://arxiv.org/abs/2302.01677v2
- Date: Mon, 5 Jun 2023 08:35:02 GMT
- Title: Revisiting Personalized Federated Learning: Robustness Against Backdoor
Attacks
- Authors: Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao
Cheng
- Abstract summary: We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
- Score: 53.81129518924231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, besides improving prediction accuracy, we study whether
personalization could bring robustness benefits against backdoor attacks. We conduct
the first study of backdoor attacks in the pFL framework, testing 4 widely used
backdoor attacks against 6 pFL methods on benchmark datasets FEMNIST and
CIFAR-10, a total of 600 experiments. The study shows that pFL methods with
partial model-sharing can significantly boost robustness against backdoor
attacks. In contrast, pFL methods with full model-sharing do not show
robustness. To analyze the reasons for varying robustness performances, we
provide comprehensive ablation studies on different pFL methods. Based on our
findings, we further propose a lightweight defense method, Simple-Tuning, which
empirically improves defense performance against backdoor attacks. We believe
that our work can both provide guidance for applying pFL with robustness in mind
and offer valuable insights for designing more robust FL methods in the
future. We open-source our code to establish the first benchmark for black-box
backdoor attacks in pFL:
https://github.com/alibaba/FederatedScope/tree/backdoor-bench.
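Simple-Tuning is only named in the abstract; as a rough, non-authoritative sketch of the kind of lightweight client-side defense it suggests, the snippet below re-initializes the linear classification head and fine-tunes only that head on a client's local clean data while keeping the shared feature extractor frozen. This reading is an assumption drawn from the abstract's emphasis on partial model-sharing, not the authors' reference implementation; `model.head` and `local_loader` are hypothetical names.

```python
import torch
import torch.nn as nn

def simple_tuning(model: nn.Module, local_loader, epochs: int = 5, lr: float = 0.01):
    # Hypothetical sketch: re-initialize the linear classification head and
    # fine-tune only that head on the client's local clean data, keeping the
    # shared feature extractor frozen. `model.head` is an assumed attribute
    # pointing at the final nn.Linear layer.
    for p in model.parameters():                 # freeze everything first
        p.requires_grad = False
    nn.init.kaiming_uniform_(model.head.weight)  # re-initialize the head
    nn.init.zeros_(model.head.bias)
    for p in model.head.parameters():            # un-freeze only the head
        p.requires_grad = True

    optimizer = torch.optim.SGD(model.head.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```

Keeping the classification head out of the aggregated parameters mirrors the partial model-sharing pFL methods that the study finds markedly more robust than full model-sharing ones.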
Related papers
- Revisiting Backdoor Attacks against Large Vision-Language Models [76.42014292255944]
This paper empirically examines the generalizability of backdoor attacks during the instruction tuning of LVLMs.
We modify existing backdoor attacks based on the above key observations.
This paper underscores that even simple traditional backdoor strategies pose a serious threat to LVLMs.
arXiv Detail & Related papers (2024-06-27T02:31:03Z) - Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning [31.386836775526685]
We propose PFedBA, a stealthy and effective backdoor attack strategy applicable to PFL systems.
Our study sheds light on the subtle yet potent backdoor threats to PFL systems, urging the community to bolster defenses against emerging backdoor challenges.
arXiv Detail & Related papers (2024-06-10T12:14:05Z) - You Can Backdoor Personalized Federated Learning [18.91908598410108]
Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario.
We propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies.
arXiv Detail & Related papers (2023-07-29T12:25:04Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - BaFFLe: Backdoor detection via Feedback-based Federated Learning [3.6895394817068357]
We propose Backdoor detection via Feedback-based Federated Learning (BAFFLE).
We show that BAFFLE reliably detects state-of-the-art backdoor attacks with a detection accuracy of 100% and a false-positive rate below 5%.
arXiv Detail & Related papers (2020-11-04T07:44:51Z) - Defending against Backdoors in Federated Learning with Robust Learning
Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol (see the sketch after this list).
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
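The last entry above adjusts the server-side learning rate; a minimal sketch of one common reading of that idea (a sign-agreement rule that negates the learning rate on coordinates where too few clients agree on the update direction) follows. This is not the authors' code, and the function and parameter names are hypothetical.

```python
import numpy as np

def robust_lr_aggregate(global_params, client_updates, eta=1.0, threshold=4):
    # Sign-agreement ("robust learning rate") aggregation sketch: for every
    # parameter coordinate, count how strongly clients agree on the update
    # direction; where agreement falls below `threshold`, negate the server
    # learning rate so that coordinate is pushed away from the contested
    # (potentially backdoored) direction.
    updates = np.stack(client_updates)                # (num_clients, dim)
    agreement = np.abs(np.sign(updates).sum(axis=0))  # per-coordinate |sum of signs|
    lr = np.where(agreement >= threshold, eta, -eta)  # flip lr where agreement is low
    return global_params + lr * updates.mean(axis=0)
```

With an honest majority, most coordinates keep the positive rate, while coordinates dominated by a few (possibly malicious) clients are moved in the opposite direction; since only the aggregation rule changes, the defense indeed requires minimal change to the FL protocol.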
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.