SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
- URL: http://arxiv.org/abs/2504.21052v1
- Date: Tue, 29 Apr 2025 05:28:12 GMT
- Title: SFIBA: Spatial-based Full-target Invisible Backdoor Attacks
- Authors: Yangxu Yin, Honglong Chen, Yudong Gao, Peng Sun, Zhishuai Li, Weifeng Liu,
- Abstract summary: Multi-target backdoor attacks pose significant security threats to deep neural networks. We propose a Spatial-based Full-target Invisible Backdoor Attack, called SFIBA. We show that SFIBA can achieve excellent attack performance and stealthiness, while preserving the model's performance on benign samples.
- Score: 9.124060365358748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-target backdoor attacks pose significant security threats to deep neural networks, as they can preset multiple target classes through a single backdoor injection. This allows attackers to control the model to misclassify poisoned samples with triggers into any desired target class during inference, exhibiting superior attack performance compared with conventional backdoor attacks. However, existing multi-target backdoor attacks fail to guarantee trigger specificity and stealthiness in black-box settings, resulting in two main issues. First, they are unable to simultaneously target all classes when only training data can be manipulated, limiting their effectiveness in realistic attack scenarios. Second, the triggers often lack visual imperceptibility, making poisoned samples easy to detect. To address these problems, we propose a Spatial-based Full-target Invisible Backdoor Attack, called SFIBA. It restricts triggers for different classes to specific local spatial regions and morphologies in the pixel space to ensure specificity, while employing a frequency-domain-based trigger injection method to guarantee stealthiness. Specifically, for injection of each trigger, we first apply fast fourier transform to obtain the amplitude spectrum of clean samples in local spatial regions. Then, we employ discrete wavelet transform to extract the features from the amplitude spectrum and use singular value decomposition to integrate the trigger. Subsequently, we selectively filter parts of the trigger in pixel space to implement trigger morphology constraints and adjust injection coefficients based on visual effects. We conduct experiments on multiple datasets and models. The results demonstrate that SFIBA can achieve excellent attack performance and stealthiness, while preserving the model's performance on benign samples, and can also bypass existing backdoor defenses.
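The injection pipeline described in the abstract (per-class local region, FFT amplitude spectrum, DWT feature extraction, SVD-based blending, pixel-space morphology filtering) can be illustrated with a short sketch. The snippet below is a minimal, hedged reconstruction using NumPy and PyWavelets, not the authors' implementation; the `alpha` injection coefficient, the Haar wavelet, the singular-value blending rule, and the boolean `mask` used for the morphology constraint are all illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets


def inject_trigger(patch, trigger, mask=None, alpha=0.05, wavelet="haar"):
    """Illustrative sketch of a frequency-domain trigger injection for one
    local region of one channel; not the authors' released implementation.

    patch   : 2-D float array, a local spatial region of a clean sample (0-255)
    trigger : 2-D float array of the same shape, the class-specific trigger
    mask    : optional boolean array enforcing the trigger-morphology constraint
    alpha   : injection coefficient, tuned for visual imperceptibility
    """
    # 1) FFT: take the amplitude spectrum of the clean region, keep the phase
    spectrum = np.fft.fft2(patch)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # 2) DWT: extract features (sub-bands) from the amplitude spectrum
    cA, detail = pywt.dwt2(amplitude, wavelet)
    tA, _ = pywt.dwt2(trigger, wavelet)

    # 3) SVD: blend the singular values of the clean band with the trigger band
    U, S, Vt = np.linalg.svd(cA, full_matrices=False)
    _, St, _ = np.linalg.svd(tA, full_matrices=False)
    cA_mixed = U @ np.diag((1.0 - alpha) * S + alpha * St) @ Vt

    # Invert the DWT and FFT to return to pixel space
    amp_poisoned = pywt.idwt2((cA_mixed, detail), wavelet)
    amp_poisoned = amp_poisoned[: patch.shape[0], : patch.shape[1]]  # crop padding
    poisoned = np.fft.ifft2(amp_poisoned * np.exp(1j * phase)).real

    # 4) Pixel-space filtering: keep the perturbation only where the
    #    (hypothetical) morphology mask allows it
    if mask is not None:
        poisoned = np.where(mask, poisoned, patch)
    return np.clip(poisoned, 0.0, 255.0)
```

In a full-target attack along these lines, one such injection would be performed per target class, each confined to its own local spatial region and trigger morphology.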
Related papers
- FFCBA: Feature-based Full-target Clean-label Backdoor Attacks [10.650796825194337]
Backdoor attacks pose a significant threat to deep neural networks.
Clean-label attacks are more stealthy, as they avoid modifying the labels of poisoned samples.
We propose the Feature-based Full-target Clean-label Backdoor Attacks (FFCBA).
arXiv Detail & Related papers (2025-04-29T05:49:42Z) - Stealthy Patch-Wise Backdoor Attack in 3D Point Cloud via Curvature Awareness [52.07366900097567]
Backdoor attacks pose a severe threat to deep neural networks (DNNs).
Existing 3D point cloud backdoor attacks primarily rely on sample-wise global modifications.
We propose the Stealthy Patch-Wise Backdoor Attack (SPBA), which employs the first patch-wise trigger for 3D point clouds.
arXiv Detail & Related papers (2025-03-12T12:30:59Z) - Robust and Transferable Backdoor Attacks Against Deep Image Compression With Selective Frequency Prior [118.92747171905727]
This paper introduces a novel frequency-based trigger injection model for launching backdoor attacks with multiple triggers on learned image compression models.
We design attack objectives tailored to diverse scenarios, including: 1) degrading compression quality in terms of bit-rate and reconstruction accuracy; 2) targeting task-driven measures like face recognition and semantic segmentation.
Experiments show that our trigger injection models, combined with minor modifications to encoder parameters, successfully inject multiple backdoors and their triggers into a single compression model.
arXiv Detail & Related papers (2024-12-02T15:58:40Z) - LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm [11.95174457001938]
This work proposes LADDER, a multi-objective black-box backdoor attack in dual domains via an evolutionary algorithm.
In particular, we formulate LADDER as a multi-objective optimization problem (MOP) and solve it via a multi-objective evolutionary algorithm (MOEA).
Experiments comprehensively show that LADDER achieves attack effectiveness of at least 99%, attack robustness of 90.23%, superior natural stealthiness (1.12x to 196.74x improvement), and excellent spectral stealthiness (8.45x enhancement) compared with current stealthy attacks, measured by the average $l$-norm across 5 public datasets.
arXiv Detail & Related papers (2024-11-28T11:50:23Z) - Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z) - NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise [0.19820694575112383]
Backdoor attacks pose a significant threat when using third-party data for deep learning development.
We introduce a novel sample-specific multi-targeted backdoor attack, namely NoiseAttack.
This work is the first of its kind to launch a vision backdoor attack with the intent to generate multiple targeted classes.
arXiv Detail & Related papers (2024-09-03T19:24:46Z) - LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning [49.174341192722615]
Backdoor attack poses a significant security threat to Deep Learning applications.
Recent papers have introduced attacks using sample-specific invisible triggers crafted through special transformation functions.
We introduce a novel backdoor attack LOTUS to address both evasiveness and resilience.
arXiv Detail & Related papers (2024-03-25T21:01:29Z) - Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method demonstrates a high Attack Success Rate (ASR) in FSL tasks with different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z) - FTA: Stealthy and Adaptive Backdoor Attack with Flexible Triggers on Federated Learning [11.636353298724574]
We propose a new stealthy and robust backdoor attack against federated learning (FL) defenses.
We build a generative trigger function that can learn to manipulate benign samples with an imperceptible flexible trigger pattern.
Our trigger generator can keep learning and adapt across different rounds, allowing it to adjust to changes in the global model.
arXiv Detail & Related papers (2023-08-31T20:25:54Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Influencer Backdoor Attack on Semantic Segmentation [39.57965442338681]
Influencer Backdoor Attack (IBA) is a backdoor attack on semantic segmentation models.
IBA is expected to maintain the classification accuracy of non-victim pixels and mislead classifications of all victim pixels in every single inference.
We introduce an innovative Pixel Random Labeling strategy which maintains optimal performance even when the trigger is placed far from the victim pixels.
arXiv Detail & Related papers (2023-03-21T17:45:38Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label from the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)