On the Vulnerability of Backdoor Defenses for Federated Learning
- URL: http://arxiv.org/abs/2301.08170v1
- Date: Thu, 19 Jan 2023 17:02:02 GMT
- Title: On the Vulnerability of Backdoor Defenses for Federated Learning
- Authors: Pei Fang, Jinghui Chen
- Abstract summary: Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.
In this paper, we study whether current defense mechanisms truly neutralize the backdoor threats to federated learning.
We propose a new federated backdoor attack method to probe possible countermeasures.
- Score: 8.345632941376673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a popular distributed machine learning paradigm
that enables jointly training a global model without sharing clients' data.
However, its repetitive server-client communication leaves room for backdoor
attacks that aim to mislead the global model into a targeted misprediction when
a specific trigger pattern is presented. In response to such backdoor threats
on federated learning, various defense measures have been proposed. In this
paper, we study whether current defense mechanisms truly neutralize the
backdoor threats to federated learning in a practical setting by proposing a
new federated backdoor attack method to probe possible countermeasures. Different
from traditional backdoor injection based on training (on triggered data) and
rescaling (the malicious client model), the proposed backdoor attack framework
(1) directly modifies (a small proportion of) local model weights to inject the
backdoor trigger via sign flips; and (2) jointly optimizes the trigger pattern with
the client model, and is thus more persistent and stealthy at circumventing
existing defenses. In a case study, we examine the strengths and weaknesses of
recent federated backdoor defenses from three major categories and provide
suggestions to practitioners training federated models in practice.
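To make these two ingredients concrete, here is a minimal PyTorch-style sketch, intended only as an illustration rather than the authors' implementation: the sign-flip routine negates a small fraction of each layer's weights (the magnitude-based selection rule and the `flip_ratio` value are assumptions), and the trigger loop refines an additive trigger pattern against the modified model. The paper optimizes the trigger jointly with the client model; this sketch freezes the model for brevity.

```python
import torch
import torch.nn.functional as F

def sign_flip_inject(model, flip_ratio=0.01):
    """Negate a small fraction of weights in each parameter tensor.

    The magnitude-based selection rule and flip_ratio are illustrative
    assumptions, not necessarily the paper's exact choices.
    """
    with torch.no_grad():
        for param in model.parameters():
            k = max(1, int(flip_ratio * param.numel()))
            _, idx = param.abs().flatten().topk(k)  # k largest-magnitude entries
            flat = param.view(-1)
            flat[idx] = -flat[idx]

def optimize_trigger(model, images, target_class, steps=100, lr=0.1):
    """Refine an additive trigger so triggered inputs map to the target class.

    Only the trigger is updated here; the model is frozen for brevity.
    """
    for p in model.parameters():
        p.requires_grad_(False)
    trigger = torch.zeros_like(images[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        poisoned = torch.clamp(images + trigger, 0.0, 1.0)
        loss = F.cross_entropy(model(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```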
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
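A minimal sketch of the general idea of unlearning a backdoor from a small poisoned set, assuming gradient descent on clean data combined with gradient ascent on the poisoned samples; the token-level portion-unlearning regime of the paper is not reproduced, and `ascent_weight` is a hypothetical knob.

```python
import torch.nn.functional as F

def unlearning_step(model, optimizer, clean_batch, poisoned_batch, ascent_weight=0.5):
    """One update that keeps clean behaviour while unlearning the backdoor.

    Descent on the clean batch, ascent on the small poisoned batch; the
    combination and ascent_weight are assumptions for illustration.
    """
    clean_x, clean_y = clean_batch
    poison_x, poison_y = poisoned_batch  # poison_y is the attacker's target label
    clean_loss = F.cross_entropy(model(clean_x), clean_y)
    poison_loss = F.cross_entropy(model(poison_x), poison_y)
    # minimize clean loss, maximize loss on the trigger -> target mapping
    loss = clean_loss - ascent_weight * poison_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return clean_loss.item(), poison_loss.item()
```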
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples [67.66153875643964]
Backdoor attacks are serious security threats to machine learning models.
In this paper, we explore the task of purifying a backdoored model using a small clean dataset.
By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk.
arXiv Detail & Related papers (2023-07-20T03:56:04Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Invariant Aggregator for Defending against Federated Backdoor Attacks [28.416262423174796]
Federated learning enables training high-utility models across several clients without directly sharing their private data.
As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients.
We propose an invariant aggregator that redirects the aggregated update to invariant directions that are generally useful.
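As a rough illustration of redirecting the aggregate toward "invariant" directions, the sketch below keeps only the coordinates on which client updates largely agree in sign and zeroes out the rest; the sign-agreement criterion and threshold are assumptions for illustration, not necessarily the paper's construction.

```python
import torch

def invariant_aggregate(client_updates, sign_agreement=0.6):
    """Average client updates, keeping only coordinates with consistent signs.

    client_updates: list of flattened update tensors of equal shape.
    A coordinate is kept when the mean of the clients' signs exceeds the
    (assumed) sign_agreement threshold in absolute value.
    """
    stacked = torch.stack(client_updates)          # (num_clients, dim)
    mean_sign = torch.sign(stacked).mean(dim=0)    # in [-1, 1]; +/-1 = full agreement
    mask = (mean_sign.abs() >= sign_agreement).float()
    return stacked.mean(dim=0) * mask
```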
arXiv Detail & Related papers (2022-10-04T18:06:29Z)
- Towards a Defense against Backdoor Attacks in Continual Federated Learning [26.536009090970257]
We propose a novel framework for defending against backdoor attacks in the federated continual learning setting.
Our framework trains two models in parallel: a backbone model and a shadow model.
We show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
arXiv Detail & Related papers (2022-05-24T03:04:21Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
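For context on the adversarial training baseline mentioned above (not the paper's hybrid strategy, which is not detailed in this summary), a standard PGD adversarial training step looks roughly as follows; the perturbation budget and step sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_adv_training_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=7):
    """One standard PGD adversarial training step (illustrative hyperparameters)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # train the model on the adversarial examples
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(torch.clamp(x + delta.detach(), 0.0, 1.0)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```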
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
- Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis [49.38856542573576]
Edge devices in federated learning usually have much more limited computation and communication resources compared to servers in a data center.
In this work, we empirically demonstrate that Lottery Ticket models are just as vulnerable to backdoor attacks as the original dense models.
arXiv Detail & Related papers (2021-09-22T04:19:59Z)
- Backdoor Attacks on Federated Meta-Learning [0.225596179391365]
We analyze the effects of backdoor attacks on federated meta-learning.
We propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features.
arXiv Detail & Related papers (2020-06-12T09:23:24Z)
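For the matching-network idea above, a minimal sketch under the assumption that class prototypes are mean feature embeddings of clean support examples and that prediction uses cosine similarity; the actual defense's details are not reproduced here.

```python
import torch
import torch.nn.functional as F

def prototype_predict(feature_extractor, support_x, support_y, query_x, num_classes):
    """Predict query labels from feature similarity to per-class prototypes.

    Prototypes are mean embeddings of clean support samples (an assumed
    choice); every class is assumed to have at least one support sample.
    """
    with torch.no_grad():
        support_f = F.normalize(feature_extractor(support_x), dim=1)
        query_f = F.normalize(feature_extractor(query_x), dim=1)
        prototypes = torch.stack([
            support_f[support_y == c].mean(dim=0) for c in range(num_classes)
        ])
        prototypes = F.normalize(prototypes, dim=1)
        sims = query_f @ prototypes.t()  # cosine similarities
        return sims.argmax(dim=1)
```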
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.