Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain
- URL: http://arxiv.org/abs/2109.05507v1
- Date: Sun, 12 Sep 2021 12:44:52 GMT
- Title: Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain
- Authors: Hasan Abed Al Kader Hammoud, Bernard Ghanem
- Abstract summary: We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
- Score: 80.24811082454367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been utilized in various applications
ranging from image classification and facial recognition to medical imagery
analysis and real-time object detection. As our models become more
sophisticated and complex, the computational cost of training such models
becomes a burden for small companies and individuals; for this reason,
outsourcing the training process has been the go-to option for such users.
Unfortunately, outsourcing the training process comes at the cost of
vulnerability to backdoor attacks. These attacks aim at establishing hidden
backdoors in the DNN such that the model performs well on benign samples but
outputs a particular target label when a trigger is applied to the input.
Current backdoor attacks rely on generating triggers in the image/pixel domain;
however, as we show in this paper, it is not the only domain to exploit and one
should always "check the other doors". In this work, we propose a complete
pipeline for generating a dynamic, efficient, and invisible backdoor attack in
the frequency domain. We show the advantages of utilizing the frequency domain
for establishing undetectable and powerful backdoor attacks through extensive
experiments on various datasets and network architectures. The backdoored
models are shown to break various state-of-the-art defences. We also show two
possible defences that succeed against frequency-based backdoor attacks and
possible ways for the attacker to bypass them. We conclude the work with some
remarks regarding a network's learning capacity and the capability of embedding
a backdoor attack in the model.
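The abstract does not spell out implementation details, but as a rough illustration of what a frequency-domain trigger can look like, the sketch below perturbs a few DCT coefficients with `scipy.fft` and transforms the image back to the pixel domain. The coefficient positions, the magnitude, and the choice of the DCT itself are assumptions made for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only (not the paper's exact pipeline): a hypothetical
# frequency-domain trigger that perturbs a few mid-frequency DCT coefficients
# and transforms the image back to the pixel domain.
import numpy as np
from scipy.fft import dctn, idctn

def apply_frequency_trigger(image, positions=((15, 15), (31, 31)), magnitude=30.0):
    """Poison a single-channel H x W image (float, 0-255) in the DCT domain.

    `positions` and `magnitude` are illustrative choices, not the paper's.
    """
    coeffs = dctn(image, norm="ortho")       # pixel domain -> frequency domain
    for u, v in positions:
        coeffs[u, v] += magnitude            # bump the chosen frequency bins
    poisoned = idctn(coeffs, norm="ortho")   # frequency domain -> pixel domain
    return np.clip(poisoned, 0.0, 255.0)

# The perturbation is spread over all pixels, so no localized patch is visible.
img = np.random.rand(32, 32) * 255.0
poisoned = apply_frequency_trigger(img)
print(float(np.abs(poisoned - img).max()))   # small, spatially diffuse change
```

Because the change lives in a handful of frequency bins, it spreads over every pixel rather than forming a localized patch, which is the intuition behind calling such triggers invisible.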
Related papers
- Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor [0.24335447922683692]
We introduce a new type of backdoor attack that conceals itself within the underlying model architecture.
Add-on modules in the architecture's layers can detect the presence of input trigger tokens and modify layer weights.
We conduct extensive experiments to evaluate our attack methods using two model architecture settings on five different large language datasets.
arXiv Detail & Related papers (2024-09-03T14:54:16Z)
- Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
And, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Architectural Backdoors in Neural Networks [27.315196801989032]
We introduce a new class of backdoor attacks that hide inside model architectures.
These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture.
We demonstrate that model architectural backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch.
arXiv Detail & Related papers (2022-06-15T22:44:03Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt a static trigger, i.e., the trigger keeps the same appearance and location across the training and testing images.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- WaNet -- Imperceptible Warping-based Backdoor Attack [20.289889150949836]
A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears.
In this paper, we propose using warping-based triggers to attack third-party models.
The proposed backdoor outperforms previous methods in a human-inspection test by a wide margin, demonstrating its stealthiness (a rough sketch of a warping-based trigger follows this list).
arXiv Detail & Related papers (2021-02-20T15:25:36Z)
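For comparison with the frequency-domain sketch above, here is a minimal sketch of a warping-style trigger in the spirit of the WaNet entry: a fixed, smooth displacement field resamples the image so no visible patch appears. The field construction (smoothed random noise via `scipy.ndimage`) and all parameters are assumptions for illustration, not WaNet's actual warping procedure.

```python
# Minimal, hypothetical sketch of a warping-style trigger: a fixed, smooth
# displacement field resamples the image, leaving no visible patch. The field
# construction (smoothed random noise) is an assumption, not WaNet's procedure.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp_trigger(image, strength=0.5, smooth=4.0, seed=0):
    """Apply a fixed, smooth warp to a single-channel H x W image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Smooth random displacement field; the fixed seed means the same field is
    # reused for every poisoned image, so the model can associate it with the
    # target label.
    dx = gaussian_filter(rng.standard_normal((h, w)), smooth) * strength
    dy = gaussian_filter(rng.standard_normal((h, w)), smooth) * strength
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + dy, cols + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")

img = np.random.rand(32, 32)
poisoned = warp_trigger(img)   # visually almost identical to `img`
```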