Hard No-Box Adversarial Attack on Skeleton-Based Human Action
Recognition with Skeleton-Motion-Informed Gradient
- URL: http://arxiv.org/abs/2308.05681v2
- Date: Fri, 18 Aug 2023 15:34:40 GMT
- Title: Hard No-Box Adversarial Attack on Skeleton-Based Human Action
Recognition with Skeleton-Motion-Informed Gradient
- Authors: Zhengzhi Lu, He Wang, Ziyi Chang, Guoan Yang and Hubert P. H. Shum
- Abstract summary: Methods for skeleton-based human activity recognition have been shown to be vulnerable to adversarial attacks.
In this paper, we consider a new attack task: the attacker has no access to the victim model, the training data or the labels.
Specifically, we define an adversarial loss on a learned motion manifold to compute a new gradient for the attack, named the skeleton-motion-informed (SMI) gradient.
- Score: 14.392853911242923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, methods for skeleton-based human activity recognition have been
shown to be vulnerable to adversarial attacks. However, these attack methods
require either full knowledge of the victim model (i.e., white-box attacks),
access to training data (i.e., transfer-based attacks) or frequent model queries
(i.e., black-box attacks). These requirements are highly restrictive, raising
the question of how detrimental the vulnerability is in practice. In this
paper, we show that the vulnerability indeed exists. To this end, we consider
a new attack task: the attacker has no access to the victim model, the
training data or the labels, for which we coin the term hard no-box attack.
Specifically, we
first learn a motion manifold where we define an adversarial loss to compute a
new gradient for the attack, named the skeleton-motion-informed (SMI) gradient.
Our gradient encodes information about the motion dynamics, which distinguishes
it from existing gradient-based attack methods that compute the loss gradient
assuming each dimension of the data is independent. The SMI gradient can
augment many gradient-based attack methods, leading to a new family of no-box
attack methods. Extensive evaluation and comparison show that our method poses
a real threat to existing classifiers. The results also show that the SMI gradient
improves the transferability and imperceptibility of adversarial samples in
both no-box and transfer-based black-box settings.
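
To make the idea above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of how a dynamics-aware gradient could augment an iterative sign-based attack in the hard no-box setting. Here `encoder` stands in for a pre-trained motion-manifold model (e.g., an autoencoder trained on unlabeled skeleton sequences), `adversarial_loss` is a placeholder for a label-free loss defined on that manifold, and the temporal smoothing is only a crude stand-in for the SMI gradient; none of these names come from the authors' code.

```python
import torch


def smi_style_attack(x, encoder, adversarial_loss,
                     epsilon=0.01, alpha=0.002, steps=20):
    """x: clean skeleton sequence of shape (batch, frames, joints, 3)."""
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)

        # The loss is defined on a learned motion manifold (the latent space of
        # a motion model) rather than on raw joint coordinates, so its gradient
        # carries information about the motion dynamics.
        latent = encoder(x_adv)
        loss = adversarial_loss(latent)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Crude dynamics-aware shaping: average each frame's gradient with its
        # temporal neighbours so the perturbation respects temporal continuity.
        # This is only an illustration, not the paper's exact SMI gradient.
        grad = (grad + grad.roll(1, dims=1) + grad.roll(-1, dims=1)) / 3.0

        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # I-FGSM-style step
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.detach()

    return x_adv
```

Because the update direction couples neighbouring frames, such a perturbation tends to look like a plausible motion rather than independent per-joint noise, which is the property the abstract credits for the improved imperceptibility and transferability.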
Related papers
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Adversarially Robust Classification by Conditional Generative Model
Inversion [4.913248451323163]
We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack.
Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images.
We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks.
arXiv Detail & Related papers (2022-01-12T23:11:16Z) - Adversarial Transfer Attacks With Unknown Data and Class Overlap [19.901933940805684]
Current transfer attack research grants the attacker an unrealistic advantage.
We present the first study of transferring adversarial attacks that focuses on the data available to the attacker and the victim under imperfect settings.
This threat model is relevant to applications in medicine, malware, and others.
arXiv Detail & Related papers (2021-09-23T03:41:34Z) - Meta Gradient Adversarial Attack [64.5070788261061]
This paper proposes a novel architecture called Meta Gradient Adversarial Attack (MGAA), which is plug-and-play and can be integrated with any existing gradient-based attack method.
Specifically, we randomly sample multiple models from a model zoo to compose different tasks and iteratively simulate a white-box attack and a black-box attack in each task.
By narrowing the gap between the gradient directions in white-box and black-box attacks, the transferability of adversarial examples on the black-box setting can be improved.
arXiv Detail & Related papers (2021-08-09T17:44:19Z) - Improving the Transferability of Adversarial Examples with New Iteration
Framework and Input Dropout [8.24029748310858]
We propose a new gradient iteration framework, which redefines the relationship between the iteration step size, the number of perturbations, and the maximum iterations.
Under this framework, we easily improve the attack success rate of DI-TI-MIM.
In addition, we propose a gradient iterative attack method based on input dropout, which can be well combined with our framework.
arXiv Detail & Related papers (2021-06-03T06:36:38Z) - Staircase Sign Method for Boosting Adversarial Attacks [123.19227129979943]
Crafting adversarial examples for the transfer-based attack is challenging and remains a research hot spot.
We propose a novel Staircase Sign Method (S$^2$M) to alleviate this issue, thus boosting transfer-based attacks.
Our method can be generally integrated into any transfer-based attacks, and the computational overhead is negligible.
arXiv Detail & Related papers (2021-04-20T02:31:55Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z) - Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data [96.92837098305898]
Black-box attacks aim to craft adversarial perturbations by querying input-output pairs of machine learning models.
Black-box attacks often suffer from the issue of query inefficiency due to the high dimensionality of the input space.
We propose a novel technique called the spanning attack, which constrains adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset.
arXiv Detail & Related papers (2020-05-11T05:57:15Z)
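
As a rough illustration of the subspace idea in the last entry above, the sketch below constrains perturbations to a low-dimensional subspace spanned by an auxiliary unlabeled dataset. The SVD-based basis construction and the helper names are assumptions for illustration, not the spanning attack's exact algorithm.

```python
import numpy as np


def build_subspace_basis(aux_data, k=32):
    """Orthonormal basis for the top-k directions of an auxiliary, unlabeled
    dataset (rows are flattened samples). Hypothetical helper, not from the paper."""
    centered = aux_data - aux_data.mean(axis=0, keepdims=True)
    # Right singular vectors give the principal directions of the auxiliary data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                          # shape (k, d)


def project_perturbation(delta, basis):
    """Restrict a perturbation to the span of `basis`, so a query-based
    black-box attack only searches k dimensions instead of the full d."""
    coeffs = basis @ delta                 # coordinates in the subspace, shape (k,)
    return basis.T @ coeffs                # back in the ambient space, shape (d,)


# Toy usage with random data standing in for the auxiliary set.
rng = np.random.default_rng(0)
aux = rng.normal(size=(500, 300))          # 500 unlabeled samples, d = 300
basis = build_subspace_basis(aux, k=32)
delta_full = rng.normal(size=300)          # candidate perturbation in R^d
delta_constrained = project_perturbation(delta_full, basis)
```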