Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks
- URL: http://arxiv.org/abs/2112.07921v2
- Date: Thu, 7 Dec 2023 09:59:23 GMT
- Title: Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks
- Authors: Jaehui Hwang, Huan Zhang, Jun-Ho Choi, Cho-Jui Hsieh, and Jong-Seok Lee
- Abstract summary: We develop a novel defense method using temporal shuffling of input videos against adversarial attacks on action recognition models.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
- Score: 67.58887471137436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, video-based action recognition methods using convolutional neural networks (CNNs) have achieved remarkable recognition performance. However, there is still a lack of understanding about the generalization mechanism of action recognition models. In this paper, we suggest that action recognition models rely on motion information less than expected, and thus they are robust to randomization of frame orders. Furthermore, we find that the motion monotonicity remaining after randomization also contributes to such robustness. Based on this observation, we develop a novel defense method using temporal shuffling of input videos against adversarial attacks on action recognition models. Another observation enabling our defense method is that adversarial perturbations on videos are sensitive to temporal destruction. To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
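As a rough illustration of the idea, the sketch below applies random temporal shuffling at inference time and averages the model's predictions over several shuffled copies of the input. This is a minimal sketch, not the authors' exact procedure: the PyTorch classifier "model" (a 3D CNN taking input of shape (N, C, T, H, W)), the number of shuffles, and the logit-averaging policy are all illustrative assumptions.

    import torch

    def temporal_shuffle_defense(model, video, num_shuffles=5):
        # Classify `video` (C, T, H, W) by averaging logits over randomly
        # permuted frame orders. Per the abstract, recognition is robust to
        # frame-order randomization while adversarial perturbations are
        # sensitive to temporal destruction, so shuffling tends to break
        # the attack more than the recognition.
        model.eval()
        num_frames = video.shape[1]
        logits_sum = None
        with torch.no_grad():
            for _ in range(num_shuffles):
                perm = torch.randperm(num_frames)      # random frame order
                shuffled = video[:, perm]              # reorder the time axis
                logits = model(shuffled.unsqueeze(0))  # add batch dimension
                logits_sum = logits if logits_sum is None else logits_sum + logits
        return (logits_sum / num_shuffles).argmax(dim=1)

One could also shuffle only within local windows rather than over the whole clip, which would preserve the motion monotonicity the paper identifies as contributing to robustness.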
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
As in face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- DirecFormer: A Directed Attention in Transformer Approach to Robust Action Recognition [22.649489578944838]
This work presents a novel end-to-end Transformer-based Directed Attention framework for robust action recognition.
The contributions of this work are three-fold. Firstly, we introduce the problem of ordered temporal learning issues to the action recognition problem.
Secondly, a new Directed Attention mechanism is introduced to understand and provide attention to human actions in the right order.
arXiv Detail & Related papers (2022-03-19T03:41:48Z)
- Attacking Video Recognition Models with Bullet-Screen Comments [79.53159486470858]
We introduce a novel adversarial attack, which attacks video recognition models with bullet-screen comment (BSC) attacks.
BSCs can be regarded as a kind of meaningful patch; adding one to a clean video neither affects people's understanding of the video content nor arouses their suspicion.
arXiv Detail & Related papers (2021-10-29T08:55:50Z)
- PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos [20.949656274807904]
We propose a novel yet simple algorithm called Pseudo-Adversarial Training (PAT) to detect the adversarial frames in a video without requiring knowledge of the attack.
Experimental results on UCF-101 and 20BN-Jester datasets show that PAT can detect the adversarial video frames and videos with a high detection rate.
arXiv Detail & Related papers (2021-09-13T04:05:46Z)
- Just One Moment: Inconspicuous One Frame Attack on Deep Action Recognition [34.925573731184514]
We study the vulnerability of deep learning-based action recognition methods against adversarial attacks.
We present a new one frame attack that adds an inconspicuous perturbation to only a single frame of a given video clip.
Our method shows high fooling rates while producing perturbations that are hardly perceivable to human observers.
arXiv Detail & Related papers (2020-11-30T07:11:56Z)
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain a motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
arXiv Detail & Related papers (2020-03-17T10:54:12Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network-based architecture to semantically generate high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.