Defending against Backdoors in Federated Learning with Robust Learning Rate
- URL: http://arxiv.org/abs/2007.03767v4
- Date: Thu, 29 Jul 2021 21:40:02 GMT
- Title: Defending against Backdoors in Federated Learning with Robust Learning Rate
- Authors: Mustafa Safa Ozdayi, Murat Kantarcioglu, Yulia R. Gel
- Abstract summary: Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol.
- Score: 25.74681620689152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) allows a set of agents to collaboratively train a
model without sharing their potentially sensitive data. This makes FL suitable
for privacy-preserving applications. At the same time, FL is susceptible to
adversarial attacks due to decentralized and unvetted data. One important line
of attack against FL is the backdoor attack. In a backdoor attack, an
adversary tries to embed backdoor functionality into the model during training
that can later be activated to cause a desired misclassification. To prevent
backdoor attacks, we propose a lightweight defense that requires minimal change
to the FL protocol. At a high level, our defense carefully adjusts the
aggregation server's learning rate, per dimension and per round, according to
the sign information of agents' updates. We first conjecture the necessary
steps to carry out a successful backdoor attack in the FL setting, and then
explicitly formulate the defense based on our conjecture. Through experiments,
we provide empirical evidence that supports our conjecture, and we test our
defense against backdoor attacks under different settings. We observe that
either the backdoor is completely eliminated or its accuracy is significantly
reduced. Overall, our experiments suggest that our defense significantly
outperforms some recently proposed defenses in the literature while having
minimal influence on the accuracy of the trained models. In addition, we
provide a convergence rate analysis for our proposed
scheme.
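
To make the high-level description concrete, here is a minimal sketch of such a sign-based robust learning rate in Python. It is an illustration under stated assumptions, not the paper's exact implementation: the vote threshold theta, the plain averaging of agent updates, and all names (robust_lr_aggregate, agent_updates) are hypothetical.

    import numpy as np

    def robust_lr_aggregate(agent_updates, server_lr=1.0, theta=4):
        # agent_updates: list of flattened per-agent update vectors.
        # theta (vote threshold) and plain averaging are assumptions
        # made for illustration, not values taken from the paper.
        updates = np.stack(agent_updates)              # (num_agents, dim)
        agreement = np.abs(np.sign(updates).sum(axis=0))
        # Per dimension, per round: keep +server_lr where the net sign
        # agreement (|sum of update signs|) reaches theta; flip to
        # -server_lr otherwise, pushing disputed dimensions backwards.
        lr = np.where(agreement >= theta, server_lr, -server_lr)
        return lr * updates.mean(axis=0)

    # Usage sketch: global_weights += robust_lr_aggregate(agent_updates)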
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-level partial unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z) - Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses [50.53476890313741]
We propose an effective, stealthy, and persistent backdoor attack on FedGL.
We develop a certified defense for any backdoored FedGL model against triggers of any shape at any location.
Our results show the attack can obtain > 90% backdoor accuracy on almost all datasets.
arXiv Detail & Related papers (2024-07-12T02:43:44Z) - Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z) - Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z) - FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local
Ultimate Gradients Inspection [3.3711670942444014]
Federated learning (FL) enables multiple clients to train a model without compromising sensitive data.
The decentralized nature of FL makes it susceptible to adversarial attacks, especially backdoor insertion during training.
We propose FedGrad, a defense for FL that is resistant to cutting-edge backdoor attacks.
arXiv Detail & Related papers (2023-04-29T19:31:44Z) - Revisiting Personalized Federated Learning: Robustness Against Backdoor
Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Towards a Defense against Backdoor Attacks in Continual Federated
Learning [26.536009090970257]
We propose a novel framework for defending against backdoor attacks in the federated continual learning setting.
Our framework trains two models in parallel: a backbone model and a shadow model.
We show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
arXiv Detail & Related papers (2022-05-24T03:04:21Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training-time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z) - BlockFLA: Accountable Federated Learning via Hybrid Blockchain
Architecture [11.908715869667445]
Federated Learning (FL) is a distributed and decentralized machine learning protocol.
It has been shown that an attacker can inject backdoors into the trained model during FL.
We develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect and punish attackers.
arXiv Detail & Related papers (2020-10-14T22:43:39Z)