Adversarial Bone Length Attack on Action Recognition
- URL: http://arxiv.org/abs/2109.05830v1
- Date: Mon, 13 Sep 2021 09:59:44 GMT
- Title: Adversarial Bone Length Attack on Action Recognition
- Authors: Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
- Abstract summary: We show that adversarial attacks can be performed on skeleton-based action recognition models.
Specifically, we restrict the perturbations to the lengths of the skeleton's bones, which allows an adversary to manipulate only approximately 30 effective dimensions.
- Score: 4.9631159466100305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skeleton-based action recognition models have recently been shown to be
vulnerable to adversarial attacks. Compared to adversarial attacks on images,
perturbations to skeletons are typically bounded to a lower dimension of
approximately 100 per frame. This lower-dimensional setting makes it more
difficult to generate imperceptible perturbations. Existing attacks resolve
this by exploiting the temporal structure of the skeleton motion so that the
perturbation dimension increases to thousands. In this paper, we show that
adversarial attacks can be performed on skeleton-based action recognition
models, even in a significantly low-dimensional setting without any temporal
manipulation. Specifically, we restrict the perturbations to the lengths of the
skeleton's bones, which allows an adversary to manipulate only approximately 30
effective dimensions. We conducted experiments on the NTU RGB+D and HDM05
datasets and demonstrated that the proposed attack deceives models with small
perturbations, achieving success rates that sometimes exceed 90%.
Furthermore, we discovered an interesting phenomenon: in our low-dimensional
setting, adversarial training with the bone length attack shares a property
with data augmentation, improving not only the adversarial robustness but also
the classification accuracy on the original data. This is an interesting
counterexample to the trade-off between adversarial robustness and clean
accuracy, which has been widely observed in studies on adversarial training in
the high-dimensional regime.
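To make the low-dimensional attack surface concrete, the following is a minimal PGD-style sketch, not the paper's exact optimization: joint positions are rebuilt from fixed per-frame bone directions and a single bone-length vector shared across all frames, so the attacker searches over only roughly J-1 dimensions. The model interface, data layout, and all hyperparameters here are illustrative assumptions.

```python
# Hedged sketch of a bone-length attack (PyTorch). `model`, shapes, and
# hyperparameters are assumptions for illustration, not the paper's code.
import torch
import torch.nn.functional as F

def joints_from_bones(root, lengths, dirs, parents):
    """Forward kinematics: rebuild joint positions for one frame.

    root:    (3,) root joint position
    lengths: (J-1,) bone lengths, shared across all frames
    dirs:    (J-1, 3) unit parent-to-child directions for this frame
    parents: parents[j] = parent joint index of joint j (parents[0] = -1)
    """
    pos = [root]
    for j in range(1, len(parents)):
        pos.append(pos[parents[j]] + lengths[j - 1] * dirs[j - 1])
    return torch.stack(pos)                      # (J, 3)

def bone_length_attack(model, roots, dirs, lengths, parents, label,
                       eps=0.05, alpha=0.01, steps=20):
    """PGD-style loop confined to the (J-1)-dimensional length vector."""
    delta = torch.zeros_like(lengths, requires_grad=True)
    for _ in range(steps):
        adv_len = lengths + delta                # perturb lengths only
        frames = torch.stack([
            joints_from_bones(roots[t], adv_len, dirs[t], parents)
            for t in range(dirs.shape[0])
        ])                                       # (T, J, 3) motion sequence
        loss = F.cross_entropy(model(frames[None]), label[None])
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # keep perturbation small
            delta.grad.zero_()
    return (lengths + delta).detach()
```

Because the length vector is shared across frames and a human skeleton has only a few dozen bones, the search space stays around 30 dimensions regardless of sequence length; the same loop can also feed perturbed lengths back as training data, which is the regime in which the abstract reports adversarial training improving clean accuracy as well.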
Related papers
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
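As a rough, assumption-laden illustration of the saliency-guided region search mentioned above, the sketch below ranks points by gradient magnitude and keeps the top k as a candidate attack region; HiT-ADV's actual two-stage scoring also weighs imperceptibility, and `model`, the shapes, and `k` are hypothetical.

```python
# Illustrative saliency-based region selection on a point cloud; not the
# HiT-ADV algorithm itself. `model` and all shapes are assumptions.
import torch
import torch.nn.functional as F

def saliency_top_k(model, points, label, k=64):
    """Return indices of the k points with the largest gradient norm."""
    pts = points.clone().requires_grad_(True)        # (N, 3) point cloud
    loss = F.cross_entropy(model(pts[None]), label[None])
    loss.backward()
    scores = pts.grad.norm(dim=1)                    # per-point saliency
    return scores.topk(k).indices                    # candidate attack region
```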
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g., self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
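To make the input/output-only threat model concrete, here is a generic hard-label (decision-based) loop: start from a perturbation that already fools the model and shrink it while the label stays flipped. This is not BASAR's algorithm, which additionally keeps queries on the skeletal motion manifold; `query_label` is a hypothetical oracle returning only the predicted class.

```python
# Generic decision-based attack sketch (hard-label, black-box); illustrative
# only. `query_label(x)` is a hypothetical oracle returning a class label.
import torch

def decision_based_attack(query_label, x, y_true, steps=500, sigma=0.01):
    adv = x + torch.randn_like(x)                # large random start ...
    while query_label(adv) == y_true:            # ... that already fools
        adv = x + torch.randn_like(x)
    for _ in range(steps):
        candidate = adv + sigma * torch.randn_like(x)
        candidate = candidate + 0.1 * (x - candidate)   # contract toward x
        if query_label(candidate) != y_true:     # accept only if still fooled
            adv = candidate
    return adv
```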
- A Hierarchical Assessment of Adversarial Severity [3.0478504236139528]
We study the effects of adversarial noise by measuring robustness and severity on a large-scale dataset: iNaturalist-H.
We enhance traditional adversarial training with a simple yet effective Hierarchical Curriculum Training that gradually learns the nodes of the hierarchical tree.
We perform extensive experiments showing that hierarchical defenses allow deep models to boost adversarial robustness by 1.85% and reduce the severity of all attacks by 0.17 on average.
arXiv Detail & Related papers (2021-08-26T13:29:17Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
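A hedged sketch of what a momentum-based iterative patch update could look like, assuming a density-map crowd counter whose predicted count the attacker inflates; the paper's exact loss, patch placement, and certified-defense procedure are not reproduced, and `model` and `mask` are assumptions.

```python
# MI-FGSM-style patch update sketch for a crowd counter; illustrative only.
# `model` returns a density map; `mask` is 1 inside the patch region.
import torch

def momentum_patch_attack(model, image, mask, steps=40, alpha=2/255, mu=1.0):
    patch = torch.rand_like(image) * mask            # random initial patch
    g = torch.zeros_like(image)                      # accumulated momentum
    for _ in range(steps):
        adv = (image * (1 - mask) + patch).detach().requires_grad_(True)
        count = model(adv[None]).sum()               # predicted crowd count
        count.backward()                             # one possible objective:
        grad = adv.grad * mask                       # inflate the count
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)
        patch = (patch + alpha * g.sign() * mask).clamp(0, 1)
    return patch
```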
- Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially for large images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that adds noise to inputs and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
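The inconsistency idea can be pictured with a minimal detector sketch: adversarial inputs tend to change their predicted label under small random noise more often than clean inputs do. The noise scale, trial count, and threshold below are illustrative assumptions, and the paper's use of saliency maps is omitted.

```python
# Minimal noise-inconsistency detector sketch; thresholds are assumptions.
import torch

def looks_adversarial(model, x, n_trials=8, sigma=0.02, threshold=0.5):
    base = model(x[None]).argmax(dim=1)              # prediction on the input
    flips = 0
    for _ in range(n_trials):
        noisy = x + sigma * torch.randn_like(x)      # small random noise
        if model(noisy[None]).argmax(dim=1) != base:
            flips += 1
    return flips / n_trials > threshold              # unstable -> flag it
```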
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
- Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and currently achieves around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
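As a simple illustration of temporal sparsity, the sketch below perturbs only a small, randomly chosen fraction of frames with one signed gradient step; the paper itself generates adversarial silhouettes with a GAN rather than by gradient sign, and `model`, the shapes, and `eps` are assumptions.

```python
# Temporally sparse perturbation sketch: attack ~1/40 of the frames only.
# Illustrative; the paper uses a GAN to synthesize adversarial frames.
import torch
import torch.nn.functional as F

def sparse_frame_attack(model, frames, label, frac=1/40, eps=8/255):
    T = frames.shape[0]
    idx = torch.randperm(T)[:max(1, int(T * frac))]  # frames to attack
    mask = torch.zeros_like(frames)
    mask[idx] = 1.0
    x = frames.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x[None]), label[None])
    loss.backward()
    # One signed gradient step, restricted to the selected frames.
    return (frames + eps * x.grad.sign() * mask).clamp(0, 1)
```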
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all content) and is not responsible for any consequences of its use.