Just One Moment: Inconspicuous One Frame Attack on Deep Action Recognition
- URL: http://arxiv.org/abs/2011.14585v1
- Date: Mon, 30 Nov 2020 07:11:56 GMT
- Title: Just One Moment: Inconspicuous One Frame Attack on Deep Action Recognition
- Authors: Jaehui Hwang, Jun-Hyuk Kim, Jun-Ho Choi, and Jong-Seok Lee
- Abstract summary: We study the vulnerability of deep learning-based action recognition methods against adversarial attacks.
We present a new one frame attack that adds an inconspicuous perturbation to only a single frame of a given video clip.
Our method shows high fooling rates and produces perturbations that are hardly perceivable to human observers.
- Score: 34.925573731184514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The video-based action recognition task has been extensively studied in
recent years. In this paper, we study the vulnerability of deep learning-based
action recognition methods against adversarial attacks using a new one frame
attack that adds an inconspicuous perturbation to only a single frame of a
given video clip. We investigate the effectiveness of our one frame attack on
state-of-the-art action recognition models, along with a thorough analysis of
the vulnerability in terms of model structure and the perceivability of the
perturbation. Our method shows high fooling rates and produces perturbations
that are hardly perceivable to human observers, as evaluated by a subjective
test. In addition, we present a video-agnostic approach that finds a universal
perturbation.
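Since the attack described above touches only one frame, a natural way to picture it is a gradient-based perturbation optimized for a single time step while the rest of the clip stays untouched. The PyTorch sketch below is a hedged illustration under that reading; `video_model`, the [0, 1] pixel range, the exhaustive frame search, and the PGD-style update with `epsilon`, `alpha`, and `steps` are assumptions rather than the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def one_frame_attack(video_model, clip, label, epsilon=0.02, alpha=0.005, steps=10):
    """Search for a single frame of `clip` (shape [1, T, C, H, W], values in [0, 1])
    whose perturbation flips the prediction of `video_model`.
    Illustrative sketch only; frame selection, step sizes, and budget are assumptions."""
    video_model.eval()
    for p in video_model.parameters():
        p.requires_grad_(False)  # we only need gradients w.r.t. the frame perturbation
    T = clip.shape[1]
    for t in range(T):  # try each frame in turn and stop at the first success
        delta = torch.zeros_like(clip[:, t], requires_grad=True)
        for _ in range(steps):  # PGD-style update restricted to frame t
            adv = torch.cat(
                [clip[:, :t], (clip[:, t] + delta).unsqueeze(1), clip[:, t + 1:]], dim=1
            )
            loss = F.cross_entropy(video_model(adv), label)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()   # ascend the classification loss
                delta.clamp_(-epsilon, epsilon)      # keep the perturbation inconspicuous
            delta.grad.zero_()
        adv = clip.clone()
        adv[:, t] = (clip[:, t] + delta.detach()).clamp(0, 1)
        with torch.no_grad():
            if video_model(adv).argmax(dim=1).item() != label.item():
                return adv, t                        # attack succeeded on frame t
    return clip, None                                # no single frame flipped the label
```

A practical implementation would presumably rank frames by vulnerability instead of trying them in order, but that choice is beyond what the abstract specifies.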
Related papers
- Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
By adding perturbations in a transformed domain, our proposed attack plants an imperceptible, temporally distributed trigger across the video frames.
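The entry above says only that the trigger lives in a transformed domain and is spread over time. As one hedged illustration of what that could look like, the following sketch nudges two low-frequency DCT coefficients of every frame by a small, time-varying amount; the DCT domain, the coefficient indices, and the amplitude are placeholder assumptions rather than the cited paper's actual trigger.

```python
import numpy as np
from scipy.fft import dctn, idctn

def plant_frequency_trigger(clip, amplitude=2.0):
    """Embed a weak, temporally distributed trigger into a grayscale clip.
    `clip` has shape (T, H, W) with values in [0, 255]; the DCT-domain trigger,
    coefficient indices, and amplitude are illustrative assumptions."""
    T = clip.shape[0]
    poisoned = np.empty(clip.shape, dtype=np.float64)
    for t in range(T):
        coeffs = dctn(clip[t].astype(np.float64), norm="ortho")
        # Each frame receives a slightly different low-frequency offset, so the
        # trigger is distributed over time rather than concentrated in one frame.
        phase = np.sin(2.0 * np.pi * t / T)
        coeffs[1, 2] += amplitude * phase
        coeffs[2, 1] += amplitude * (1.0 - abs(phase))
        poisoned[t] = idctn(coeffs, norm="ortho")
    return np.clip(poisoned, 0.0, 255.0)
```

At training time such a poisoned clip would be paired with the attacker's target label; the small amplitude is what keeps the trigger imperceptible.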
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks [67.58887471137436]
We develop a novel defense method that uses temporal shuffling of input videos to protect action recognition models against adversarial attacks.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
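Read literally, the defense randomizes the temporal order of the incoming clip before classification, which can disrupt perturbations tied to a particular frame arrangement while the action content remains recognizable. The PyTorch sketch below is an assumed illustration of that idea; the averaging over several random permutations is an added convenience, not necessarily part of the cited method.

```python
import torch

def shuffled_inference(video_model, clip, num_permutations=5):
    """Classify `clip` (shape [1, T, C, H, W]) after randomly shuffling its frames.
    Averaging over several random orders is an illustrative choice, not necessarily
    part of the cited defense."""
    video_model.eval()
    T = clip.shape[1]
    probs = []
    with torch.no_grad():
        for _ in range(num_permutations):
            order = torch.randperm(T)
            logits = video_model(clip[:, order])        # temporally shuffled input
            probs.append(torch.softmax(logits, dim=1))
    return torch.stack(probs).mean(dim=0)               # averaged class probabilities
```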
arXiv Detail & Related papers (2021-12-15T06:57:01Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack [29.850716475485715]
We propose a new method to attack action recognizers that rely on 3D skeletal motion.
Our method involves an innovative perceptual loss that ensures the imperceptibility of the attack.
Our method shows that adversarial attacks on 3D skeletal motions, one type of time-series data, are significantly different from traditional adversarial attack problems.
arXiv Detail & Related papers (2021-03-09T10:53:58Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space using a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
- Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed Videos [82.02074241700728]
In this paper, we present an action recognition model that is trained with only video-level labels.
Our method leverages per-person detectors that have been trained on large image datasets within a Multiple Instance Learning framework.
We show how we can apply our method in cases where the standard Multiple Instance Learning assumption, that each bag contains at least one instance with the specified label, is invalid.
arXiv Detail & Related papers (2020-07-21T10:45:05Z)
- Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and has achieved around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)