Temporal-Distributed Backdoor Attack Against Video Based Action
Recognition
- URL: http://arxiv.org/abs/2308.11070v3
- Date: Sat, 9 Dec 2023 15:00:03 GMT
- Title: Temporal-Distributed Backdoor Attack Against Video Based Action
Recognition
- Authors: Xi Li, Songhe Wang, Ruiquan Huang, Mahanth Gowda, George Kesidis
- Abstract summary: We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
- Score: 21.916002204426853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have achieved tremendous success in various
applications including video action recognition, yet remain vulnerable to
backdoor attacks (Trojans). A backdoor-compromised model misclassifies a test
instance (from a non-target class) to the attacker-chosen target class whenever
the instance is embedded with a specific trigger, while maintaining high
accuracy on attack-free instances. Although there are extensive studies on backdoor attacks
against image data, the susceptibility of video-based systems under backdoor
attacks remains largely unexplored. Current studies are direct extensions of
approaches proposed for image data, e.g., the triggers are embedded
independently within individual frames and thus tend to be detectable by existing defenses.
In this paper, we introduce a simple yet effective backdoor attack against
video data. Our proposed attack, adding perturbations in a transformed domain,
plants an imperceptible, temporally distributed trigger across the video
frames, and is shown to be resilient to existing defensive strategies. The
effectiveness of the proposed attack is demonstrated by extensive experiments
with various well-known models on two video recognition benchmarks, UCF101 and
HMDB51, and a sign language recognition benchmark, Greek Sign Language (GSL)
dataset. We delve into the impact of several influential factors on our
proposed attack and identify an intriguing effect termed "collateral damage"
through extensive studies.
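The abstract does not name the transform, but the idea can be illustrated with a minimal sketch, assuming a DCT along the temporal axis as the transformed domain; the function name, frequency indices, and amplitude below are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of a temporally distributed, transformed-domain trigger,
# assuming a DCT along the temporal axis as the transform. The paper's
# actual transform, frequency indices, and amplitude may differ; all
# names here are illustrative.
import numpy as np
from scipy.fft import dct, idct

def embed_temporal_trigger(video: np.ndarray,
                           freq_indices=(1, 2),
                           amplitude: float = 2.0) -> np.ndarray:
    """video: (T, H, W, C) uint8 clip; returns a poisoned copy."""
    x = video.astype(np.float64)
    # Move each pixel's temporal signal into the frequency domain.
    coeffs = dct(x, axis=0, norm="ortho")
    # Perturb a few temporal-frequency coefficients. Because the change
    # lives in the frequency domain, the inverse transform spreads a
    # small perturbation across every frame instead of marking any
    # single frame, which is what frame-wise defenses tend to look for.
    for k in freq_indices:
        coeffs[k] += amplitude
    poisoned = idct(coeffs, axis=0, norm="ortho")
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Usage: poison a small fraction of training clips and relabel them to
# the attacker's target class before training.
clip = np.random.randint(0, 256, (16, 112, 112, 3), dtype=np.uint8)
poisoned = embed_temporal_trigger(clip)
```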
Related papers
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks could be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
For the first time, we also study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos [20.949656274807904]
We propose a novel yet simple algorithm called Pseudo-Adversarial Training (PAT) to detect adversarial frames in a video without requiring knowledge of the attack.
Experimental results on UCF-101 and 20BN-Jester datasets show that PAT can detect the adversarial video frames and videos with a high detection rate.
arXiv Detail & Related papers (2021-09-13T04:05:46Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models (see the sketch after this list).
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
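As a contrast to the transformed-domain trigger above, the following is a minimal sketch of how a universal adversarial trigger for a video classifier might be optimized, in the spirit of the clean-label attack; `model`, `loader`, the patch placement, bounds, and hyperparameters are all assumed names and settings, not that paper's actual procedure.

```python
# Minimal sketch of optimizing a universal adversarial trigger for a
# video classifier. `model` (clips of shape (B, C, T, H, W) -> logits)
# and `loader` are assumed to exist; placement, bounds, and
# hyperparameters are illustrative, not the paper's actual settings.
import torch
import torch.nn.functional as F

def craft_universal_trigger(model, loader, target_class: int,
                            patch: int = 20, steps: int = 50,
                            lr: float = 0.01) -> torch.Tensor:
    model.eval()
    # One (C, 1, patch, patch) perturbation shared by all inputs; the
    # singleton time dimension broadcasts across all T frames.
    trigger = torch.zeros(3, 1, patch, patch, requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        for clips, _ in loader:
            x = clips.clone()
            # Stamp the same perturbation on the corner of every frame.
            x[..., -patch:, -patch:] = x[..., -patch:, -patch:] + trigger
            labels = torch.full((x.shape[0],), target_class)
            # Push every clip toward the attacker's target class.
            loss = F.cross_entropy(model(x), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                trigger.clamp_(-0.1, 0.1)  # keep the trigger low-amplitude
    return trigger.detach()
```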