Look, Listen, and Attack: Backdoor Attacks Against Video Action
Recognition
- URL: http://arxiv.org/abs/2301.00986v1
- Date: Tue, 3 Jan 2023 07:40:28 GMT
- Title: Look, Listen, and Attack: Backdoor Attacks Against Video Action
Recognition
- Authors: Hasan Abed Al Kader Hammoud, Shuming Liu, Mohammad Alkhrasi, Fahad
AlBalawi, Bernard Ghanem
- Abstract summary: We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
And, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models.
- Score: 53.720010650445516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to a class of attacks called
"backdoor attacks", which create an association between a backdoor trigger and
a target label the attacker is interested in exploiting. A backdoored DNN
performs well on clean test images, yet persistently predicts an
attacker-defined label for any sample in the presence of the backdoor trigger.
Although backdoor attacks have been extensively studied in the image domain,
there are very few works that explore such attacks in the video domain, and
they tend to conclude that image backdoor attacks are less effective in the
video domain. In this work, we revisit the traditional backdoor threat model
and incorporate additional video-related aspects to that model. We show that
poisoned-label image backdoor attacks could be extended temporally in two ways,
statically and dynamically, leading to highly effective attacks in the video
domain. In addition, we explore natural video backdoors to highlight the
seriousness of this vulnerability in the video domain. And, for the first time,
we study multi-modal (audiovisual) backdoor attacks against video action
recognition models, where we show that attacking a single modality is enough
for achieving a high attack success rate.
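As a rough illustration of the two temporal extensions described in the abstract, the sketch below stamps a patch trigger onto a video clip either statically (the same trigger at the same location in every frame) or dynamically (per-frame variation). This is a minimal sketch, assuming a (T, H, W, C) float clip in [0, 1] and a simple patch trigger; the paper's actual trigger designs and poisoning pipeline are not specified in this summary, so all names and parameters here are illustrative.

    import numpy as np

    def poison_video(clip, trigger, mode="static", target_label=0):
        # clip: (T, H, W, C) float video in [0, 1]; trigger: (th, tw, C)
        # patch, assumed smaller than the frame. Poisoned-label attack:
        # the poisoned sample is relabeled to target_label.
        T, H, W, _ = clip.shape
        th, tw = trigger.shape[:2]
        poisoned = clip.copy()
        for t in range(T):
            if mode == "static":
                # Static extension: identical trigger, fixed corner.
                y, x, patch = H - th, W - tw, trigger
            else:
                # Dynamic extension: trigger location/content varies per
                # frame (an illustrative choice, not the paper's scheme).
                rng = np.random.default_rng(t)
                y = int(rng.integers(0, H - th + 1))
                x = int(rng.integers(0, W - tw + 1))
                patch = np.roll(trigger, shift=t, axis=1)
            poisoned[t, y:y + th, x:x + tw] = patch
        return poisoned, target_label

    clip = np.random.rand(16, 112, 112, 3)   # dummy 16-frame clip
    trigger = np.ones((8, 8, 3))             # plain white patch
    poisoned, label = poison_video(clip, trigger, mode="dynamic")

A dynamic trigger of this kind breaks the assumption, made by several defenses, that the trigger is identical across frames; per the abstract, both variants lead to highly effective attacks in the video domain.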
Related papers
- Temporal-Distributed Backdoor Attack Against Video Based Action
Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames (a hedged sketch of this transformed-domain idea appears after this list).
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency
Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Backdoor Attack in the Physical World [49.64799477792172]
Backdoor attacks aim to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
- Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
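The transformed-domain and frequency-domain entries above both hide the trigger in a spectral representation rather than in pixel space. The sketch below conveys the general idea by perturbing a mid-frequency block of each frame's 2D FFT; the actual transforms, coefficient choices, and amplitudes used by those papers are not given in these summaries, so everything here is an assumption for illustration.

    import numpy as np

    def add_frequency_trigger(clip, amplitude=0.02, band=(8, 12)):
        # clip: (T, H, W, C) float video in [0, 1]. Bump a small
        # mid-frequency block of each frame's spectrum; the change is
        # spread over the whole frame, so no localized patch is visible
        # (illustrative band/amplitude, not any paper's actual values).
        T, H, W, C = clip.shape
        lo, hi = band
        poisoned = np.empty_like(clip)
        for t in range(T):
            for c in range(C):
                spec = np.fft.fft2(clip[t, :, :, c])
                spec[lo:hi, lo:hi] += amplitude * H * W
                # Hermitian symmetry is ignored for brevity; taking the
                # real part keeps the frame real-valued.
                frame = np.real(np.fft.ifft2(spec))
                poisoned[t, :, :, c] = np.clip(frame, 0.0, 1.0)
        return poisoned

Because the perturbation lives in a frequency band rather than a pixel patch, the resulting trigger is temporally distributed and imperceptible in the sense those entries describe.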
This list is automatically generated from the titles and abstracts of the papers on this site.