Dynamic Backdoor Attacks Against Machine Learning Models
- URL: http://arxiv.org/abs/2003.03675v2
- Date: Thu, 3 Mar 2022 19:04:16 GMT
- Title: Dynamic Backdoor Attacks Against Machine Learning Models
- Authors: Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang
- Abstract summary: We propose the first class of dynamic backdooring techniques against deep neural networks (DNN), namely Random Backdoor, Backdoor Generating Network (BaN), and conditional Backdoor Generating Network (c-BaN).
BaN and c-BaN, which are based on a novel generative network, are the first two schemes to algorithmically generate triggers.
Our techniques achieve almost perfect attack performance on backdoored data with a negligible utility loss.
- Score: 28.799895653866788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) has made tremendous progress during the past decade and
is being adopted in various critical real-world applications. However, recent
research has shown that ML models are vulnerable to multiple security and
privacy attacks. In particular, backdoor attacks against ML models have
recently attracted considerable attention. A successful backdoor attack can cause
severe consequences, such as allowing an adversary to bypass critical
authentication systems.
Current backdooring techniques rely on adding static triggers (with fixed
patterns and locations) to ML model inputs, which makes them prone to detection
by current backdoor detection mechanisms. In this paper, we propose the first
class of dynamic backdooring techniques against deep neural networks (DNN),
namely Random Backdoor, Backdoor Generating Network (BaN), and conditional
Backdoor Generating Network (c-BaN). Triggers generated by our techniques can
have random patterns and locations, which reduces the efficacy of current
backdoor detection mechanisms. In particular, BaN and c-BaN, which are based on
a novel generative network, are the first two schemes to algorithmically
generate triggers. Moreover, c-BaN is the first conditional backdooring
technique: given a target label, it generates a target-specific trigger. Both
BaN and c-BaN essentially form a general framework that gives the adversary the
flexibility to further customize backdoor attacks.
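To make the dynamic-trigger idea concrete, here is a minimal sketch of two of the flavours described above: a Random Backdoor-style stamp with a fresh pattern and location per input, and a c-BaN-style conditional generator. All names, sizes, and the generator architecture are illustrative assumptions, not the paper's implementation.

```python
# Minimal, illustrative sketches of dynamic triggers (not the paper's code).
import torch
import torch.nn as nn

def random_backdoor(image: torch.Tensor, trigger_size: int = 6) -> torch.Tensor:
    """Random Backdoor flavour: stamp a fresh uniform-random patch at a
    random valid location. `image` is CxHxW with values in [0, 1]."""
    c, h, w = image.shape
    y = torch.randint(0, h - trigger_size + 1, (1,)).item()
    x = torch.randint(0, w - trigger_size + 1, (1,)).item()
    poisoned = image.clone()
    poisoned[:, y:y + trigger_size, x:x + trigger_size] = torch.rand(
        c, trigger_size, trigger_size)
    return poisoned

class ConditionalTriggerGenerator(nn.Module):
    """c-BaN flavour: map noise plus a one-hot target label to a trigger
    patch, so each target label gets its own trigger distribution.
    The architecture below is a guess for illustration only."""
    def __init__(self, num_classes: int, noise_dim: int = 16,
                 trigger_size: int = 6, channels: int = 3):
        super().__init__()
        self.shape = (channels, trigger_size, trigger_size)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 128),
            nn.ReLU(),
            nn.Linear(128, channels * trigger_size ** 2),
            nn.Sigmoid(),  # keep trigger values in [0, 1]
        )

    def forward(self, z: torch.Tensor, onehot: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, onehot], dim=1)).view(-1, *self.shape)
```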
We extensively evaluate our techniques on three benchmark datasets: MNIST,
CelebA, and CIFAR-10. Our techniques achieve almost perfect attack performance
on backdoored data with a negligible utility loss. We further show that our
techniques can bypass current state-of-the-art defense mechanisms against
backdoor attacks, including ABS, Februus, MNTD, Neural Cleanse, and STRIP.
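For completeness, a hedged sketch of how the two headline numbers could be measured: utility as accuracy on clean inputs, and attack success rate as the fraction of triggered inputs classified as the target label. `model`, `loader`, and `trigger_fn` are hypothetical stand-ins, not artifacts from the paper.

```python
# Illustrative evaluation loop (PyTorch); names are stand-ins.
import torch

@torch.no_grad()
def evaluate(model, loader, target_label: int, trigger_fn):
    model.eval()
    clean_correct = attack_success = total = 0
    for images, labels in loader:
        total += labels.size(0)
        # Utility: accuracy on unmodified inputs.
        clean_correct += (model(images).argmax(1) == labels).sum().item()
        # Attack success rate: fraction of triggered inputs that the
        # model classifies as the adversary's target label.
        preds = model(trigger_fn(images)).argmax(1)
        attack_success += (preds == target_label).sum().item()
    return clean_correct / total, attack_success / total
```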
Related papers
- Reconstructive Neuron Pruning for Backdoor Defense [96.21882565556072]
We propose a novel defense called Reconstructive Neuron Pruning (RNP) to expose and prune backdoor neurons.
In RNP, unlearning operates at the neuron level while recovering operates at the filter level, forming an asymmetric reconstructive learning procedure.
We show that such an asymmetric process on only a few clean samples can effectively expose and prune the backdoor neurons implanted by a wide range of attacks.
arXiv Detail & Related papers (2023-05-24T08:29:30Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Backdoor Defense via Deconfounded Representation Learning [17.28760299048368]
We propose a Causality-inspired Backdoor Defense (CBD) to learn deconfounded representations for reliable classification.
CBD is effective in reducing backdoor threats while maintaining high accuracy in predicting benign samples.
arXiv Detail & Related papers (2023-03-13T02:25:59Z)
- An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- Model-Contrastive Learning for Backdoor Defense [13.781375023320981]
We propose a novel backdoor defense method named MCL based on model-contrastive learning.
MCL is more effective at reducing backdoor threats while maintaining higher accuracy on benign data.
arXiv Detail & Related papers (2022-05-09T16:36:46Z)
- Imperceptible Backdoor Attack: From Input Space to Feature Representation [24.82632240825927]
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs).
In this paper, we analyze the drawbacks of existing attack approaches and propose a novel imperceptible backdoor attack.
Our trigger modifies fewer than 1% of the pixels of a benign image, with a perturbation magnitude of 1.
arXiv Detail & Related papers (2022-05-06T13:02:26Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- WaNet -- Imperceptible Warping-based Backdoor Attack [20.289889150949836]
A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears.
In this paper, we propose using warping-based triggers to attack third-party models.
The proposed backdoor outperforms previous methods in a human-inspection test by a wide margin, demonstrating its stealthiness (a minimal warping sketch follows this list).
arXiv Detail & Related papers (2021-02-20T15:25:36Z)
- Backdoor Learning: A Survey [75.59571756777342]
A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
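As referenced in the WaNet entry above, here is a minimal sketch of a warping-based trigger: perturb the image's sampling grid slightly and resample. WaNet itself fixes a specific warping field chosen at training time; the random smooth flow below is only an illustrative stand-in, and all names are hypothetical.

```python
# Hypothetical warping trigger in the spirit of WaNet (not the paper's
# code): perturb the sampling grid slightly and resample the image.
import torch
import torch.nn.functional as F

def warp_trigger(images: torch.Tensor, strength: float = 0.05,
                 grid_size: int = 4) -> torch.Tensor:
    """`images` is NxCxHxW in [0, 1]; returns subtly warped copies."""
    n, _, h, w = images.shape
    # Low-resolution random flow field, upsampled so the warp is smooth.
    flow = (torch.rand(n, 2, grid_size, grid_size) - 0.5) * 2 * strength
    flow = F.interpolate(flow, size=(h, w), mode="bicubic",
                         align_corners=True)
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)  # add per-pixel offsets
    return F.grid_sample(images, grid, padding_mode="border",
                         align_corners=True)
```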