Understanding the Robustness of Skeleton-based Action Recognition under
Adversarial Attack
- URL: http://arxiv.org/abs/2103.05347v1
- Date: Tue, 9 Mar 2021 10:53:58 GMT
- Title: Understanding the Robustness of Skeleton-based Action Recognition under
Adversarial Attack
- Authors: He Wang, Feixiang He, Zhexi Peng, Yong-Liang Yang, Tianjia Shao, Kun
Zhou, David Hogg
- Abstract summary: We propose a new method to attack action recognizers that rely on 3D skeletal motion.
Our method involves an innovative perceptual loss that ensures the imperceptibility of the attack.
Our method shows that adversarial attack on 3D skeletal motions, one type of time-series data, is significantly different from traditional adversarial attack problems.
- Score: 29.850716475485715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Action recognition has been heavily employed in many applications such as
autonomous vehicles, surveillance, etc., where its robustness is a primary
concern. In this paper, we examine the robustness of state-of-the-art action
recognizers against adversarial attack, which has been rarely investigated so
far. To this end, we propose a new method to attack action recognizers that
rely on 3D skeletal motion. Our method involves an innovative perceptual loss
that ensures the imperceptibility of the attack. Empirical studies demonstrate
that our method is effective in both white-box and black-box scenarios. Its
generalizability is evidenced on a variety of action recognizers and datasets.
Its versatility is shown in different attacking strategies. Its deceitfulness
is proven in extensive perceptual studies. Our method shows that adversarial
attack on 3D skeletal motions, one type of time-series data, is significantly
different from traditional adversarial attack problems. Its success raises
serious concern on the robustness of action recognizers and provides insights
on potential improvements.
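As a rough illustration of the approach described in the abstract, the sketch below combines an untargeted classification term with a simple perceptual penalty on joint positions and accelerations. It is an assumed, minimal variant, not the paper's actual loss: the function names, the acceleration-based penalty, and all weights are illustrative, and `model` stands for any differentiable skeleton-based recognizer taking a (batch, frames, joints, 3) tensor.

```python
# Minimal sketch of an imperceptibility-constrained attack on 3D skeletal
# motion. This is NOT the paper's exact loss: the perceptual term here is an
# assumed variant that penalises joint displacement and changes to joint
# accelerations; names, weights and shapes are illustrative.
import torch
import torch.nn.functional as F

def acceleration(x):
    # x: (batch, frames, joints, 3) -> second-order temporal difference
    return x[:, 2:] - 2 * x[:, 1:-1] + x[:, :-2]

def perceptual_loss(x_adv, x_orig, w_pos=1.0, w_acc=10.0):
    # Penalise raw joint displacement and, more heavily, changes to the
    # motion dynamics, which are what human observers tend to notice.
    pos_term = F.mse_loss(x_adv, x_orig)
    acc_term = F.mse_loss(acceleration(x_adv), acceleration(x_orig))
    return w_pos * pos_term + w_acc * acc_term

def attack(model, x, label, steps=200, lr=0.01, w_perc=1.0):
    # Untargeted white-box attack: push the classifier away from the true
    # label while keeping the motion perceptually close to the original.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = x + delta
        loss = -F.cross_entropy(model(x_adv), label) \
               + w_perc * perceptual_loss(x_adv, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```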
Related papers
- Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g., self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks [67.58887471137436]
We develop a novel defense method using temporal shuffling of input videos against adversarial attacks for action recognition models.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
arXiv Detail & Related papers (2021-12-15T06:57:01Z)
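A minimal sketch of the shuffling idea in the entry above, assuming the defense amounts to permuting frames at inference time and averaging predictions over several random shuffles; the paper's exact shuffling and aggregation scheme may differ, and `model` is any clip-level classifier.

```python
# Hedged sketch of a temporal-shuffling defense: frames of the input clip are
# randomly permuted before classification and predictions are averaged over
# several shuffles. The paper's exact shuffling/aggregation scheme may differ.
import torch

def shuffled_predict(model, clip, n_shuffles=5):
    # clip: (batch, frames, ...); works for video or skeleton tensors
    probs = 0.0
    for _ in range(n_shuffles):
        perm = torch.randperm(clip.shape[1])
        probs = probs + torch.softmax(model(clip[:, perm]), dim=-1)
    return probs / n_shuffles
```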
- Adversarial Bone Length Attack on Action Recognition [4.9631159466100305]
We show that adversarial attacks can be performed on skeleton-based action recognition models.
Specifically, we restrict the perturbations to the lengths of the skeleton's bones, which allows an adversary to manipulate only approximately 30 effective dimensions.
arXiv Detail & Related papers (2021-09-13T09:59:44Z)
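The bone-length restriction in the entry above can be made concrete with a small sketch: joints are rebuilt from a root joint and parent-relative bone vectors, and only one scale factor per bone is optimized. The parent hierarchy, tensor shapes, and optimizer settings are assumptions, not the paper's setup; `parents` lists each joint's parent index with -1 for the root, parents appearing before children, and `label` is a 1-element tensor.

```python
# Hedged sketch of a bone-length-only perturbation: the attack scales the
# length of each bone (one scalar per bone) while leaving bone directions
# untouched, giving only a few dozen attack dimensions in total.
import torch
import torch.nn.functional as F

def to_bones(joints, parents):
    # joints: (frames, J, 3); one bone vector per non-root joint
    return torch.stack([joints[:, j] - joints[:, p]
                        for j, p in enumerate(parents) if p >= 0], dim=1)

def from_bones(root, bones, parents, scales):
    # Rebuild joints from the root outwards; assumes parents[j] < j
    # (parents listed before children) and that joint 0 is the root.
    out = [None] * len(parents)
    out[0] = root
    b = 0
    for j, p in enumerate(parents):
        if p < 0:
            continue
        out[j] = out[p] + scales[b] * bones[:, b]
        b += 1
    return torch.stack(out, dim=1)

def bone_length_attack(model, joints, label, parents, steps=100, lr=0.01):
    # Optimise roughly one scalar per bone via gradient ascent on the loss.
    root, bones = joints[:, 0], to_bones(joints, parents)
    scales = torch.ones(bones.shape[1], requires_grad=True)
    opt = torch.optim.Adam([scales], lr=lr)
    for _ in range(steps):
        x_adv = from_bones(root, bones, parents, scales).unsqueeze(0)
        loss = -F.cross_entropy(model(x_adv), label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return from_bones(root, bones, parents, scales.detach())
```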
- BASAR: Black-box Attack on Skeletal Action Recognition [32.88446909707521]
Skeleton-based activity recognizers are vulnerable to adversarial attacks when full knowledge of the recognizer is accessible to the attacker.
In this paper, we show that such threats do exist under black-box settings too.
Through BASAR, we show that adversarial attacks are not only a genuine threat but can also be extremely deceitful.
arXiv Detail & Related papers (2021-03-09T07:29:35Z)
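BASAR itself is a hard-label black-box attack; the sketch below shows only the generic decision-based recipe such attacks build on, not BASAR's actual algorithm (which additionally projects candidates onto the data manifold). Step sizes and names are illustrative, and `x_start` is assumed to be any motion the model already mislabels.

```python
# Hedged sketch of a hard-label (decision-based) black-box attack in the
# spirit of BASAR: start from a misclassified motion, then repeatedly step
# toward the clean motion, keeping only steps that remain misclassified.
# Only the model's predicted label is ever queried.
import torch

def predicted_label(model, x):
    with torch.no_grad():
        return model(x).argmax(dim=-1).item()  # assumes batch size 1

def blackbox_attack(model, x_clean, x_start, true_label, steps=500,
                    step_toward=0.05, noise_std=0.005):
    x_adv = x_start.clone()
    for _ in range(steps):
        # Move toward the original, plus a small random exploration step.
        cand = x_adv + step_toward * (x_clean - x_adv)
        cand = cand + noise_std * torch.randn_like(cand)
        if predicted_label(model, cand) != true_label:
            x_adv = cand  # still adversarial, and closer to the original
    return x_adv
```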
- Just One Moment: Inconspicuous One Frame Attack on Deep Action Recognition [34.925573731184514]
We study the vulnerability of deep learning-based action recognition methods against adversarial attacks.
We present a new one frame attack that adds an inconspicuous perturbation to only a single frame of a given video clip.
Our method achieves high fooling rates while producing perturbations that are hardly perceptible to human observers.
arXiv Detail & Related papers (2020-11-30T07:11:56Z)
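A minimal sketch of the one-frame idea in the entry above, assuming an FGSM-style step whose gradient is masked to a single chosen frame; the paper's frame-selection and update rules may differ, and the epsilon value is illustrative.

```python
# Hedged sketch of a one-frame attack: an FGSM-style perturbation whose
# gradient is masked so that only one chosen frame of the clip changes.
import torch
import torch.nn.functional as F

def one_frame_attack(model, clip, label, frame_idx, eps=0.03):
    # clip: (batch, frames, ...); only frame `frame_idx` is perturbed
    clip = clip.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(clip), label)
    loss.backward()
    mask = torch.zeros_like(clip)
    mask[:, frame_idx] = 1.0  # zero out the gradient everywhere else
    return (clip + eps * mask * clip.grad.sign()).detach()
```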
- Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and currently achieves around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network (GAN)-based architecture to semantically generate high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks against such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.