Clean-Label Backdoor Attacks on Video Recognition Models
- URL: http://arxiv.org/abs/2003.03030v2
- Date: Tue, 16 Jun 2020 12:13:20 GMT
- Title: Clean-Label Backdoor Attacks on Video Recognition Models
- Authors: Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen,
Yu-Gang Jiang
- Abstract summary: We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
- Score: 87.46539956587908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to backdoor attacks which can hide
backdoor triggers in DNNs by poisoning training data. A backdoored model
behaves normally on clean test images, yet consistently predicts a particular
target class for any test examples that contain the trigger pattern. As such,
backdoor attacks are hard to detect, and have raised severe security concerns
in real-world applications. Thus far, backdoor research has mostly been
conducted in the image domain with image classification models. In this paper,
we show that existing image backdoor attacks are far less effective on videos,
and outline 4 strict conditions where existing attacks are likely to fail: 1)
scenarios with more input dimensions (e.g., videos), 2) scenarios with high
resolution, 3) scenarios with a large number of classes and few examples per
class (a "sparse dataset"), and 4) attacks with access to correct labels (e.g.,
clean-label attacks). We propose the use of a universal adversarial trigger as
the backdoor trigger to attack video recognition models, a situation where
backdoor attacks are likely to be challenged by the above 4 strict conditions.
We show on benchmark video datasets that our proposed backdoor attack can
manipulate state-of-the-art video models with high success rates by poisoning
only a small proportion of training data (without changing the labels). We also
show that our proposed backdoor attack is resistant to state-of-the-art
backdoor defense/detection methods, and can even be applied to improve image
backdoor attacks. Our proposed video backdoor attack not only serves as a
strong baseline for improving the robustness of video models, but also provides
a new perspective for understanding more powerful backdoor attacks.
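For readers who want a concrete picture of the attack described above, the following is a minimal PyTorch sketch of clean-label poisoning with a universal adversarial trigger: a small patch is optimized against a clean model so that stamping it onto any clip pushes the prediction toward the target class, and the patch is then stamped onto a small fraction of target-class training clips while their labels are left untouched. All names, shapes, and optimization details here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names and shapes, not the paper's code) of clean-label
# video poisoning with a universal adversarial trigger. Clips are float tensors
# of shape (N, C, T, H, W) with values in [0, 1].
import torch
import torch.nn.functional as F


def stamp_trigger(clips, trigger):
    """Paste the trigger patch into the bottom-right corner of every frame."""
    clips = clips.clone()
    p = trigger.shape[-1]
    clips[..., -p:, -p:] = trigger[:, None, :, :]      # broadcast over the time axis
    return clips


def learn_universal_trigger(model, loader, target_class, patch_size=20,
                            steps=500, lr=0.01, device="cpu"):
    """Optimize one patch so that stamping it onto *any* clip pushes the clean
    model toward `target_class`, i.e., a universal adversarial trigger."""
    model.eval().to(device)
    trigger = torch.zeros(3, patch_size, patch_size, device=device,
                          requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _, (clips, _) in zip(range(steps), loader):    # capped at `steps` batches
        clips = clips.to(device)
        logits = model(stamp_trigger(clips, trigger))
        target = torch.full((clips.size(0),), target_class,
                            dtype=torch.long, device=device)
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        trigger.data.clamp_(0.0, 1.0)                  # keep valid pixel values
    return trigger.detach()


def poison_clean_label(dataset, trigger, target_class, poison_rate=0.1):
    """Stamp the trigger onto a fraction of *target-class* clips only,
    leaving every label unchanged (the clean-label setting)."""
    target_idx = [i for i, (_, y) in enumerate(dataset) if y == target_class]
    chosen = set(target_idx[:int(poison_rate * len(target_idx))])
    poisoned = []
    for i, (clip, label) in enumerate(dataset):
        if i in chosen:
            clip = stamp_trigger(clip.unsqueeze(0), trigger).squeeze(0)
        poisoned.append((clip, label))                 # label is not modified
    return poisoned
```

A model trained on such a poisoned set behaves normally on clean clips; at test time, stamping the same patch onto a clip of any class should flip the prediction to the target class, which is the behavior the abstract describes.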
Related papers
- PatchBackdoor: Backdoor Attack against Deep Neural Networks without
Model Modification [0.0]
Backdoor attacks are a major threat to deep learning systems in safety-critical scenarios.
In this paper, we show that backdoor attacks can be achieved without any model modification.
We implement PatchBackdoor in real-world scenarios and show that the attack remains a serious threat.
arXiv Detail & Related papers (2023-08-22T23:02:06Z)
- Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack adds perturbations in a transformed domain, planting an imperceptible, temporally distributed trigger across the video frames (a hypothetical sketch of this transformed-domain idea follows the related-papers list).
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
And, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Backdoor Attack in the Physical World [49.64799477792172]
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopted the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks [46.99548490594115]
A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small proportion of the training data.
We propose reflection backdoor (Refool) to plant reflections as backdoor into a victim model.
We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with a high success rate.
arXiv Detail & Related papers (2020-07-05T13:56:48Z)
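Two of the related papers above (the temporally distributed attack and the frequency-domain attack) place the trigger in a transformed domain rather than as a visible patch; the sketch referenced earlier appears below. It is a hypothetical PyTorch illustration of that general idea only, not either paper's actual construction: nudging a few temporal-frequency components spreads a small, hard-to-see perturbation across every frame of the clip.

```python
# Hypothetical illustration (not either paper's construction) of a trigger
# embedded in a transformed domain: a small shift of a few temporal-frequency
# components is spread across all frames by the inverse transform.
import torch


def add_temporal_frequency_trigger(clip, freq_bins=(1, 2), amplitude=0.02):
    """clip: float tensor of shape (C, T, H, W) with values in [0, 1]."""
    spec = torch.fft.rfft(clip, dim=1)                 # FFT along the time axis
    for k in freq_bins:
        # Adding amplitude * T to bin k yields a pixel-space oscillation on the
        # order of `amplitude` after the inverse transform.
        spec[:, k, :, :] += amplitude * clip.shape[1]
    triggered = torch.fft.irfft(spec, n=clip.shape[1], dim=1)
    return triggered.clamp(0.0, 1.0)
```

In a poisoning pipeline, such a function would take the place of the visible patch stamping shown after the abstract.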