Towards Understanding the Adversarial Vulnerability of Skeleton-based
Action Recognition
- URL: http://arxiv.org/abs/2005.07151v2
- Date: Sat, 6 Jun 2020 15:21:59 GMT
- Title: Towards Understanding the Adversarial Vulnerability of Skeleton-based
Action Recognition
- Authors: Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li,
Kui Ren
- Abstract summary: Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and currently achieves around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
- Score: 133.35968094967626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skeleton-based action recognition has attracted increasing attention due to
its strong adaptability to dynamic circumstances and potential for broad
applications such as autonomous and anonymous surveillance. With the help of
deep learning techniques, it has also witnessed substantial progress and
currently achieves around 90% accuracy in benign environments. On the other
hand, research on the vulnerability of skeleton-based action recognition under
different adversarial settings remains scant, which may raise security concerns
about deploying such techniques into real-world systems. However, filling this
research gap is challenging due to the unique physical constraints of skeletons
and human actions. In this paper, we attempt to conduct a thorough study
towards understanding the adversarial vulnerability of skeleton-based action
recognition. We first formulate generation of adversarial skeleton actions as a
constrained optimization problem by representing or approximating the
physiological and physical constraints with mathematical formulations. Since
the primal optimization problem with equality constraints is intractable, we
propose to solve it by optimizing its unconstrained dual problem using ADMM. We
then specify an efficient plug-in defense, inspired by recent theories and
empirical observations, against the adversarial skeleton actions. Extensive
evaluations demonstrate the effectiveness of the attack and defense method
under different settings.
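To make the attack formulation concrete, here is a minimal sketch, not the authors' code: it treats bone lengths as the equality constraints and handles them with an augmented-Lagrangian, ADMM-style primal-dual loop. All names (`model`, `bones`, the weights `alpha` and `rho`) are illustrative assumptions, and the paper's actual constraint set is richer (e.g., joint-angle limits).

```python
# ADMM-style sketch of adversarial skeleton-action generation.
# Hypothetical names throughout; only bone-length equality constraints
# are enforced here, a subset of the paper's physiological constraints.
import torch
import torch.nn.functional as F

def bone_lengths(x, bones):
    # x: (T, J, 3) joint positions over T frames; bones: (parent, child) pairs.
    return torch.stack([(x[:, c] - x[:, p]).norm(dim=-1) for p, c in bones], dim=-1)

def admm_skeleton_attack(model, x0, label, bones,
                         steps=200, inner=5, rho=1.0, alpha=10.0, lr=1e-2):
    """Untargeted attack: maximize classification loss while keeping the
    perturbation small and bone lengths (equality constraints) intact.
    x0: (T, J, 3) clean action; label: 0-dim LongTensor with the true class."""
    ref = bone_lengths(x0, bones).detach()
    x = x0.clone().detach().requires_grad_(True)
    lam = torch.zeros_like(ref)          # dual variables for g(x) = 0
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        for _ in range(inner):           # primal updates on the skeleton
            opt.zero_grad()
            g = bone_lengths(x, bones) - ref
            loss = (-F.cross_entropy(model(x.unsqueeze(0)), label.view(1))
                    + alpha * (x - x0).pow(2).sum()        # stay close to x0
                    + (lam * g).sum()                      # Lagrangian term
                    + 0.5 * rho * g.pow(2).sum())          # augmented penalty
            loss.backward()
            opt.step()
        with torch.no_grad():            # dual ascent on the multipliers
            lam += rho * (bone_lengths(x, bones) - ref)
    return x.detach()
```

The dual ascent step on `lam` mirrors the abstract's move from the intractable equality-constrained primal problem to an unconstrained dual solved iteratively.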
Related papers
- ExAL: An Exploration Enhanced Adversarial Learning Algorithm [0.0]
We propose a novel Exploration-enhanced Adversarial Learning Algorithm (ExAL).
ExAL integrates exploration-driven mechanisms to discover perturbations that maximize impact on the model's decision boundary.
We evaluate the performance of ExAL on the MNIST Handwritten Digits and Blended Malware datasets.
arXiv Detail & Related papers (2024-11-24T15:37:29Z) - Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features [6.241047489413293]
- Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features [6.241047489413293]
We propose a novel adversarial attack method against action recognizers for skeletal motions.
Our method introduces a dynamic distance function to measure the difference between skeletal motions.
We are the first to prove the effectiveness of emotional features, and provide a new idea for measuring the distance between skeletal motions.
arXiv Detail & Related papers (2024-06-28T10:45:37Z) - Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [17.198794644483026]
- Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [17.198794644483026]
We propose a new metric, termed adversarial hypervolume, that assesses the robustness of deep learning models comprehensively over a range of perturbation intensities.
We adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities.
This research contributes a new measure of robustness and establishes a standard for benchmarking the resilience of current and future defensive models against adversarial threats.
arXiv Detail & Related papers (2024-03-08T07:03:18Z) - Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z) - Mitigating Adversarial Vulnerability through Causal Parameter Estimation
by Adversarial Double Machine Learning [33.18197518590706]
Adversarial examples derived from deliberately crafted perturbations on visual inputs can easily harm the decision process of deep neural networks.
We introduce a causal approach called Adversarial Double Machine Learning (ADML) which allows us to quantify the degree of adversarial vulnerability for network predictions.
ADML can directly estimate the causal parameter of adversarial perturbations and mitigate negative effects that can potentially damage robustness.
arXiv Detail & Related papers (2023-07-14T09:51:26Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
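The adversarial training strategy for jet tagging is described only at a high level above; a standard FGSM-style adversarial training loop, a common pattern for this kind of mitigation, might look as follows. The `model`, `loader`, and `eps` are assumptions, and this is not the paper's exact recipe.

```python
# Generic FGSM-style adversarial training loop (a common mitigation pattern;
# not the paper's exact training recipe for jet-tagging networks).
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, loader, optimizer, eps=0.01):
    model.train()
    for x, y in loader:
        # Craft an FGSM perturbation of the input features on the fly.
        x_adv = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x + eps * grad.sign()).detach()
        # Train on a mix of clean and adversarial examples.
        optimizer.zero_grad()
        loss = 0.5 * (F.cross_entropy(model(x), y)
                      + F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
```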
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
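A minimal sketch of the core idea, an attack optimizer parameterized by a recurrent network that maps gradients to perturbation updates, follows; the architecture and the omitted meta-training loop are illustrative guesses, not the MAMA implementation.

```python
# Sketch of a learned attack optimizer: an RNN maps input gradients to
# perturbation updates (the general idea behind MAMA as summarized above;
# the meta-training of the RNN is omitted, and details are guesses).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNOptimizer(nn.Module):
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)   # operates coordinate-wise
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        g = grad.reshape(-1, 1)              # (num_coords, 1)
        h, c = self.cell(g, state)
        update = self.head(h).reshape(grad.shape)
        return update, (h, c)

def learned_attack(model, x, y, opt_rnn, steps=10, eps=8/255):
    # Roll out the learned optimizer to build an adversarial perturbation.
    delta = torch.zeros_like(x)
    state = None                             # zero-initialized LSTM state
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        update, state = opt_rnn(grad, state)
        delta = (delta + update).clamp(-eps, eps)
    return (x + delta).detach()
```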
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
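As summarized above, HMCAM casts adversarial example generation as sampling; a heavily simplified Hamiltonian-Monte-Carlo-flavored loop that emits a sequence of adversarial candidates is sketched below. It omits HMCAM's accumulated-momentum scheme and any Metropolis acceptance test, and all hyperparameters are hypothetical.

```python
# Simplified HMC-flavored attack sketch: momentum-based exploration of the
# loss surface that emits a sequence of adversarial candidates. Not HMCAM
# itself: accumulated momentum and the acceptance test are omitted.
import torch
import torch.nn.functional as F

def hmc_attack_sequence(model, x, y, n_samples=5, leapfrog=10,
                        step=0.005, eps=8/255):
    samples = []
    x_cur = x.clone()
    for _ in range(n_samples):
        p = torch.randn_like(x_cur)          # resample momentum each round
        x_new = x_cur.clone()
        for _ in range(leapfrog):            # simplified leapfrog-style updates
            x_new = x_new.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_new), y)   # ascend the loss
            grad, = torch.autograd.grad(loss, x_new)
            p = p + 0.5 * step * grad
            x_new = (x_new + step * p).detach()
            x_new = x.detach() + (x_new - x).clamp(-eps, eps)  # stay in eps-ball
        x_cur = x_new
        samples.append(x_cur.clone())        # one adversarial candidate per round
    return samples
```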
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks against such detection systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.