FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on
Federated Learning
- URL: http://arxiv.org/abs/2309.00127v2
- Date: Thu, 28 Sep 2023 19:06:41 GMT
- Title: FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on
Federated Learning
- Authors: Yanqi Qiao, Dazhuang Liu, Congwen Chen, Rui Wang, Kaitai Liang
- Abstract summary: We propose a new stealthy and robust backdoor attack against federated learning (FL) defenses.
We build a generative trigger function that can learn to manipulate benign samples with an imperceptible flexible trigger pattern.
Our trigger generator can keep learning and adapt across different rounds, allowing it to adjust to changes in the global model.
- Score: 11.636353298724574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current backdoor attacks against federated learning (FL) strongly rely on
universal triggers or semantic patterns, which can be easily detected and
filtered by certain defense mechanisms such as norm clipping and comparing
parameter divergences among local updates. In this work, we propose a new
stealthy and robust backdoor attack with flexible triggers against FL defenses.
To achieve this, we build a generative trigger function that can learn to
manipulate the benign samples with an imperceptible flexible trigger pattern
and simultaneously make the trigger pattern include the most significant hidden
features of the attacker-chosen label. Moreover, our trigger generator can keep
learning and adapt across different rounds, allowing it to adjust to changes in
the global model. By removing the otherwise distinguishable difference (a fixed
mapping between the trigger pattern and the target label), we make our attack naturally
stealthy. Extensive experiments on real-world datasets verify the effectiveness
and stealthiness of our attack compared to prior attacks on a decentralized
learning framework with eight well-studied defenses.
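To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the idea as stated there: a small generative trigger function that adds a bounded, sample-specific perturbation to benign inputs and is re-fitted each round against the current global model so poisoned samples are pulled toward the attacker-chosen label. This is an illustration under assumptions, not the authors' implementation; the architecture, the L-infinity budget eps, and the names (TriggerGenerator, adapt_trigger_and_poison, poison_fraction) are placeholders.

```python
# Minimal, hypothetical sketch of a flexible-trigger backdoor as described in the
# abstract above -- NOT the authors' code. All names, sizes, and hyperparameters
# (TriggerGenerator, adapt_trigger_and_poison, eps, gen_steps, poison_fraction)
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriggerGenerator(nn.Module):
    """Tiny conv net mapping an image to a bounded, sample-specific perturbation."""

    def __init__(self, channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps  # assumed L-inf budget keeping the trigger visually imperceptible
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Flexible" trigger: a different perturbation for every input, kept small.
        delta = self.eps * torch.tanh(self.net(x))
        return torch.clamp(x + delta, 0.0, 1.0)


def adapt_trigger_and_poison(global_model, generator, loader, target_label,
                             gen_steps=50, lr=1e-3, device="cpu"):
    """One round of the attacker's local work: (1) re-fit the generator against the
    CURRENT global model so poisoned inputs are classified as the attacker-chosen
    label, then (2) return an iterator of partially poisoned training batches."""
    global_model = global_model.to(device).eval()
    generator = generator.to(device).train()
    opt = torch.optim.Adam(generator.parameters(), lr=lr)

    for _, (x, _) in zip(range(gen_steps), loader):
        x = x.to(device)
        logits = global_model(generator(x))
        target = torch.full((x.size(0),), target_label, device=device)
        loss = F.cross_entropy(logits, target)  # pull poisoned inputs toward the target label
        opt.zero_grad()
        loss.backward()
        opt.step()

    def poisoned_batches(poison_fraction: float = 0.3):
        generator.eval()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            k = int(poison_fraction * x.size(0))
            if k > 0:
                with torch.no_grad():
                    x[:k] = generator(x[:k])  # stamp the learned trigger
                y[:k] = target_label          # relabel to the attacker-chosen class
            yield x, y

    return poisoned_batches
```

In a full attack of this shape, a malicious client would call adapt_trigger_and_poison at the start of each round it participates in, run its ordinary local training on the returned poisoned batches, and submit the resulting update as usual; re-fitting the generator every round is what lets the trigger track changes in the global model.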
Related papers
- Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z)
- DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning [4.932796168357307]
Federated Learning (FL) enables collaborative model training across distributed devices while preserving local data privacy, making it ideal for mobile and embedded systems.
However, the decentralized nature of FL also opens vulnerabilities to model poisoning attacks, particularly backdoor attacks.
We propose DeTrigger, a scalable and efficient backdoor-robust federated learning framework.
arXiv Detail & Related papers (2024-11-19T04:12:14Z)
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attacks pose a significant security threat to Deep Learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce LOTUS, a novel backdoor attack that addresses both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation [43.75579468533781]
Backdoors can be implanted by crafting training instances with a specific trigger and a target label.
This paper posits that backdoor poisoning attacks exhibit a spurious correlation between simple text features and classification labels.
Our empirical study reveals that the malicious triggers are highly correlated to their target labels.
arXiv Detail & Related papers (2023-05-19T11:18:20Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
Standard adversarial robustness methods assume a framework that defends against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariance perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class [17.391987602738606]
In recent years, machine learning models have been shown to be vulnerable to backdoor attacks.
This paper presents a novel backdoor attack with a much more powerful payload, denoted Marksman.
We show empirically that the proposed framework achieves high attack performance while preserving the clean-data performance in several benchmark datasets.
arXiv Detail & Related papers (2022-10-17T15:46:57Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)