A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
- URL: http://arxiv.org/abs/2307.10184v1
- Date: Mon, 3 Jul 2023 12:28:44 GMT
- Title: A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
- Authors: Yudong Gao, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo
Wang
- Abstract summary: Backdoor attacks pose serious security threats to deep neural networks (DNNs).
Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers.
We propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains.
- Score: 17.024143511814245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backdoor attacks pose serious security threats to deep neural networks
(DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on
inputs embedded with well-designed triggers while behaving normally on clean
inputs. Many works have explored the invisibility of backdoor triggers to
improve attack stealthiness. However, most of them only consider invisibility
in the spatial domain without explicitly accounting for the generation of
invisible triggers in the frequency domain, so the generated poisoned images
are easily detected by recent defense methods. To address this
issue, in this paper, we propose a DUal stealthy BAckdoor attack method named
DUBA, which simultaneously considers the invisibility of triggers in both the
spatial and frequency domains, to achieve desirable attack performance, while
ensuring strong stealthiness. Specifically, we first use Discrete Wavelet
Transform to embed the high-frequency information of the trigger image into the
clean image to ensure attack effectiveness. Then, to attain strong
stealthiness, we incorporate Fourier Transform and Discrete Cosine Transform to
mix the poisoned image and clean image in the frequency domain. Moreover, the
proposed DUBA adopts a novel attack strategy, in which the model is trained
with weak triggers and attacked with strong triggers to further enhance the
attack performance and stealthiness. We extensively evaluate DUBA against
popular image classifiers on four datasets. The results demonstrate that it
significantly outperforms state-of-the-art backdoor attacks in terms of attack
success rate and stealthiness.
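
The sketch below is only a rough illustration of the two transform steps described in the abstract, not the authors' implementation: the 'haar' wavelet, the blending weights alpha and beta, and the single-channel inputs are all assumptions made for brevity. It embeds the trigger's high-frequency DWT sub-bands into a clean image, then blends the result back toward the clean image in the DCT domain.

```python
import numpy as np
import pywt                        # Discrete Wavelet Transform (PyWavelets)
from scipy.fft import dctn, idctn  # 2-D Discrete Cosine Transform

def embed_high_freq_trigger(clean, trigger, alpha=0.2, wavelet="haar"):
    """Blend the trigger's high-frequency DWT sub-bands into the clean image."""
    cA, (cH, cV, cD) = pywt.dwt2(clean, wavelet)
    _, (tH, tV, tD) = pywt.dwt2(trigger, wavelet)
    # Only the detail (high-frequency) sub-bands are mixed; the low-frequency
    # approximation band of the clean image stays untouched.
    details = tuple((1 - alpha) * c + alpha * t
                    for c, t in zip((cH, cV, cD), (tH, tV, tD)))
    return pywt.idwt2((cA, details), wavelet)

def mix_in_frequency_domain(poisoned, clean, beta=0.7):
    """Pull the poisoned image's spectrum back toward the clean spectrum
    (shown with the DCT only; the paper also uses the Fourier Transform)."""
    p, c = dctn(poisoned, norm="ortho"), dctn(clean, norm="ortho")
    return idctn(beta * p + (1 - beta) * c, norm="ortho")

# Toy usage on random single-channel "images" in [0, 1].
rng = np.random.default_rng(0)
clean, trigger = rng.random((32, 32)), rng.random((32, 32))
poisoned = embed_high_freq_trigger(clean, trigger)
stealthy = np.clip(mix_in_frequency_domain(poisoned, clean), 0.0, 1.0)
```

Read this way, the weak-trigger/strong-trigger strategy would roughly correspond to using smaller blending weights during training than at attack time, though the paper's exact formulation may differ.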
Related papers
- An Invisible Backdoor Attack Based On Semantic Feature [0.0]
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years.
We propose a novel backdoor attack that makes imperceptible changes.
We evaluate our attack on three prominent image classification datasets.
arXiv Detail & Related papers (2024-05-19T13:50:40Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - Invisible Backdoor Attack Through Singular Value Decomposition [2.681558084723648]
Backdoor attacks pose a serious security threat to deep neural networks (DNNs).
To make triggers less perceptible or even imperceptible, various invisible backdoor attacks have been proposed.
This paper proposes an invisible backdoor attack called DEBA.
arXiv Detail & Related papers (2024-03-18T13:25:12Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - SATBA: An Invisible Backdoor Attack Based On Spatial Attention [7.405457329942725]
Backdoor attacks involve training a deep neural network (DNN) on datasets that contain hidden trigger patterns.
Most existing backdoor attacks suffer from significant drawbacks: their trigger patterns are visible and easy to detect by backdoor defenses or even human inspection.
We propose a novel backdoor attack named SATBA that overcomes these limitations using spatial attention and a U-Net-based model.
arXiv Detail & Related papers (2023-02-25T10:57:41Z) - Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers [6.772389744240447]
We propose a two-phase, image-specific trigger generation method to enhance clean-label backdoor attacks.
Our approach achieves a high attack success rate (98.98%) with a low poisoning rate, exhibits high stealthiness under many evaluation metrics, and is resistant to backdoor defense methods.
arXiv Detail & Related papers (2022-06-10T05:34:06Z) - Backdoor Attack through Frequency Domain [17.202855245008227]
We propose a new backdoor attack, FTROJAN, which trojans the frequency domain.
The key intuition is that triggering perturbations in the frequency domain correspond to small pixel-wise perturbations dispersed across the entire image (a minimal sketch of this intuition appears after this list).
We evaluate FTROJAN on several datasets and tasks, showing that it achieves a high attack success rate without significantly degrading prediction accuracy on benign inputs.
arXiv Detail & Related papers (2021-11-22T05:13:12Z) - Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning [54.15013757920703]
We propose the confusing perturbations-induced backdoor attack (CIBA).
It injects a small number of poisoned images with the correct label into the training data.
We have conducted extensive experiments to verify the effectiveness of our proposed CIBA.
arXiv Detail & Related papers (2021-09-18T07:56:59Z) - Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by synthesizing another image online from scratch for each input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
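
The frequency-domain intuition noted in the FTROJAN entry above can be visualized with the minimal sketch below. It is not the implementation from any of the listed papers: the perturbed coefficient positions and the magnitude are arbitrary choices made for illustration. Perturbing a few DCT coefficients yields tiny pixel-wise changes spread over the whole image.

```python
import numpy as np
from scipy.fft import dctn, idctn

def add_frequency_trigger(image, positions=((15, 31), (31, 15)), magnitude=0.5):
    """Perturb a handful of DCT coefficients, then invert back to pixel space."""
    coeffs = dctn(image, norm="ortho")
    for u, v in positions:
        coeffs[u, v] += magnitude
    return idctn(coeffs, norm="ortho")

image = np.random.default_rng(1).random((32, 32))
triggered = add_frequency_trigger(image)
# The trigger touches every pixel, but each pixel changes only slightly
# relative to the [0, 1] pixel range.
print(float(np.abs(triggered - image).max()))
```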
This list is automatically generated from the titles and abstracts of the papers on this site.