Towards a Defense against Backdoor Attacks in Continual Federated Learning
- URL: http://arxiv.org/abs/2205.11736v2
- Date: Thu, 26 May 2022 15:41:30 GMT
- Title: Towards a Defense against Backdoor Attacks in Continual Federated Learning
- Authors: Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh
- Abstract summary: We propose a novel framework for defending against backdoor attacks in the federated continual learning setting.
Our framework trains two models in parallel: a backbone model and a shadow model.
We show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
- Score: 26.536009090970257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backdoor attacks are a major concern in federated learning (FL) pipelines
where training data is sourced from untrusted clients over long periods of time
(i.e., continual learning). Preventing such attacks is difficult because
defenders in FL do not have access to raw training data. Moreover, in a
phenomenon we call backdoor leakage, models trained continuously eventually
suffer from backdoors due to cumulative errors in backdoor defense mechanisms.
We propose a novel framework for defending against backdoor attacks in the
federated continual learning setting. Our framework trains two models in
parallel: a backbone model and a shadow model. The backbone is trained without
any defense mechanism to obtain good performance on the main task. The shadow
model combines recent ideas from robust covariance estimation-based filters
with early-stopping to control the attack success rate even as the data
distribution changes. We provide theoretical motivation for this design and
show experimentally that our framework significantly improves upon existing
defenses against backdoor attacks.
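To make the backdoor-leakage phenomenon concrete, here is a tiny simulation of the intuition: a filter that misses only a small fraction of malicious influence per round still lets the backdoor accumulate over a long training run. The leak and decay rates below are made-up illustrative constants, not values from the paper.

```python
# Illustrative simulation of "backdoor leakage": a per-round defense that
# misses a small fraction of malicious mass still accumulates a large
# backdoor signal over many rounds. All constants are hypothetical.
rounds = 2000
leak_rate = 0.01   # hypothetical fraction of malicious influence missed per round
decay = 0.999      # hypothetical per-round decay of previously injected signal

signal = 0.0
for _ in range(rounds):
    signal = decay * signal + leak_rate

# The fixed point is leak_rate / (1 - decay) = 10.0, three orders of
# magnitude above the per-round leak of 0.01.
print(f"accumulated backdoor signal after {rounds} rounds: {signal:.2f}")
```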
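And a minimal sketch of the dual-model design described in the abstract, with toy Gaussian client updates. The Mahalanobis-distance cut is a simple stand-in for the paper's robust-covariance-estimation filter, and the fixed early-stopping round for its stopping rule; every name and constant here is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_filter(updates, keep_quantile=0.8):
    """Drop the client updates farthest from a robust center.

    Stand-in for the paper's robust covariance estimation filter: score
    each update by Mahalanobis distance and keep the closest fraction.
    """
    mu = np.median(updates, axis=0)                        # robust center
    cov = np.cov(updates.T) + 1e-3 * np.eye(updates.shape[1])
    inv = np.linalg.inv(cov)
    d = np.einsum("ij,jk,ik->i", updates - mu, inv, updates - mu)
    return updates[d <= np.quantile(d, keep_quantile)]

dim, rounds, early_stop, lr = 16, 100, 20, 0.1
backbone = np.zeros(dim)   # trained with no defense: best main-task utility
shadow = np.zeros(dim)     # filtered and early-stopped: controls attack success

for t in range(rounds):
    honest = rng.normal(0.1, 1.0, size=(45, dim))      # toy benign updates
    malicious = rng.normal(5.0, 0.1, size=(5, dim))    # toy backdoor updates
    updates = np.vstack([honest, malicious])

    backbone += lr * updates.mean(axis=0)
    if t < early_stop:                                 # early stopping bounds exposure
        shadow += lr * robust_filter(updates).mean(axis=0)

print("backbone drift toward backdoor:", round(float(backbone.mean()), 2))
print("shadow drift toward backdoor:  ", round(float(shadow.mean()), 2))
```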
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
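The mechanism summarized above, reduced to its generic form, is unlearning on a small curated poisoned set. Below is a minimal PyTorch sketch of that recipe; the model, data, and step counts are hypothetical, and the paper's token-level variant applies the forgetting signal selectively rather than to whole inputs as done here.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 10)                  # stand-in for the real model
opt = torch.optim.SGD(model.parameters(), lr=0.01)

poisoned_x = torch.randn(8, 32)                  # small curated poisoned set
poisoned_y = torch.zeros(8, dtype=torch.long)    # attacker's target label
clean_x, clean_y = torch.randn(64, 32), torch.randint(0, 10, (64,))

for _ in range(50):
    opt.zero_grad()
    forget = -F.cross_entropy(model(poisoned_x), poisoned_y)  # ascend: unlearn trigger->target
    retain = F.cross_entropy(model(clean_x), clean_y)         # descend: keep main task
    (forget + retain).backward()
    opt.step()
```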
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack [32.74007523929888]
We re-investigate the characteristics of backdoored models after defenses are applied.
We find that the original backdoors persist in models produced by existing post-training defense strategies.
We empirically show that these dormant backdoors can be easily re-activated during inference.
arXiv Detail & Related papers (2024-05-25T08:57:30Z)
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even on a potentially poisoned dataset.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
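One concrete instance of the outlier-removal approach this paper critiques is spectral signatures (Tran et al., 2018): score each training point by its alignment with the top singular direction of the centered feature representations and remove the most extreme ones. A minimal sketch on synthetic features follows; all shapes and the removal fraction are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.normal(0, 1, size=(950, 64))
shift = 3 * rng.normal(0, 1, size=(1, 64))          # planted "trigger" direction
poisoned = rng.normal(0, 1, size=(50, 64)) + shift
feats = np.vstack([clean, poisoned])                # per-example representations

centered = feats - feats.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = (centered @ vt[0]) ** 2                    # alignment with top direction

removed = np.argsort(scores)[-100:]                 # drop the most extreme 10%
caught = np.isin(np.arange(950, 1000), removed).mean()
print(f"fraction of planted poisons removed: {caught:.2f}")
```

The paper's point is that, without structural information about the clean distribution, such scores cannot separate planted backdoors from naturally occurring features.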
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- Backdoor Defense via Adaptively Splitting Poisoned Dataset [57.70673801469096]
Backdoor defenses have been studied to alleviate the threat of deep neural networks (DNNs) being backdoored and maliciously altered.
We argue that the core of training-time defense is to select poisoned samples and to handle them properly.
Under our framework, we propose an adaptively splitting dataset-based defense (ASD).
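A minimal sketch of the splitting idea, assuming a simple loss-based criterion (low-loss samples are treated as clean, high-loss ones as suspicious). This is an illustrative stand-in, not ASD's actual adaptive rule.

```python
import numpy as np

rng = np.random.default_rng(2)
losses = rng.exponential(1.0, size=1000)        # pretend per-sample training losses

threshold = np.quantile(losses, 0.9)            # hypothetical 90/10 split
clean_pool = np.where(losses <= threshold)[0]   # trained with labels as usual
suspect_pool = np.where(losses > threshold)[0]  # e.g. down-weighted or used label-free

print(len(clean_pool), "clean samples,", len(suspect_pool), "suspicious samples")
```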
arXiv Detail & Related papers (2023-03-23T02:16:38Z)
- Learning to Backdoor Federated Learning [9.046972927978997]
In a federated learning (FL) system, malicious participants can easily embed backdoors into the aggregated model.
We propose a general reinforcement learning-based backdoor attack framework.
Our framework is both adaptive and flexible and achieves strong attack performance and durability even under state-of-the-art defenses.
arXiv Detail & Related papers (2023-03-06T17:47:04Z)
- On the Vulnerability of Backdoor Defenses for Federated Learning [8.345632941376673]
Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.
In this paper, we study whether current defense mechanisms truly neutralize backdoor threats in federated learning.
We propose a new federated backdoor attack method and discuss possible countermeasures.
arXiv Detail & Related papers (2023-01-19T17:02:02Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Backdoor Learning: A Survey [75.59571756777342]
Backdoor attacks aim to embed hidden backdoors into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)