Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition
- URL: http://arxiv.org/abs/2002.09674v3
- Date: Tue, 10 Aug 2021 11:48:25 GMT
- Title: Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition
- Authors: Ziwen He, Wei Wang, Jing Dong and Tieniu Tan
- Abstract summary: We demonstrate that the state-of-the-art gait recognition model is vulnerable to adversarial attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
- Score: 56.844587127848854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait recognition is widely used in social security applications due to its
advantages in long-distance human identification. Recently, sequence-based
methods have achieved high accuracy by learning abundant temporal and spatial
information. However, their robustness under adversarial attacks has not been
clearly explored. In this paper, we demonstrate that the state-of-the-art gait
recognition model is vulnerable to such attacks. To this end, we propose a
novel temporal sparse adversarial attack method. Unlike previous
additive-noise models, which add perturbations to original samples, we employ a
generative adversarial network based architecture to semantically generate
adversarial high-quality gait silhouettes or video frames. Moreover, by
sparsely substituting or inserting a few adversarial gait silhouettes, the
proposed method ensures its imperceptibility and achieves a high attack success
rate. The experimental results show that if only one-fortieth of the frames are
attacked, the accuracy of the target model drops dramatically.
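To make the substitution step concrete, below is a minimal NumPy sketch of temporal-sparse frame substitution. The `generate_adv` callable is a hypothetical stand-in for the paper's GAN-based silhouette generator, and the frame-inversion placeholder in the usage lines is illustrative only, not the actual attack:

```python
import numpy as np

def sparse_substitute(frames, generate_adv, ratio=1 / 40, rng=None):
    """Substitute a sparse subset of silhouette frames with adversarial ones.

    frames: (T, H, W) gait silhouettes in [0, 1].
    generate_adv: stand-in for the paper's GAN-based generator, mapping one
        silhouette to an adversarial silhouette of the same shape.
    ratio: fraction of frames to attack; the paper reports that attacking
        about one-fortieth of the frames already collapses accuracy.
    """
    rng = rng or np.random.default_rng(0)
    T = len(frames)
    k = max(1, int(np.ceil(T * ratio)))            # how many frames to attack
    idx = rng.choice(T, size=k, replace=False)     # sparse temporal positions
    attacked = frames.copy()
    for i in idx:
        attacked[i] = generate_adv(frames[i])      # semantic substitution, not additive noise
    return attacked, idx

# Toy usage; the lambda is a placeholder, not the trained generator.
frames = (np.random.rand(80, 64, 44) > 0.5).astype(np.float32)
adv_frames, idx = sparse_substitute(frames, lambda f: 1.0 - f)
print(f"attacked {len(idx)} of {len(frames)} frames")
```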
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
As in face recognition, imitation attacks can be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains generated perturbations to local semantic regions for good stealthiness (see the masking sketch after this list).
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Detecting Adversarial Data via Perturbation Forgery [28.637963515748456]
Adversarial detection aims to identify and filter out adversarial data from the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data.
New attacks based on generative models with imbalanced and anisotropic noise patterns evade detection.
We propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks.
arXiv Detail & Related papers (2024-05-25T13:34:16Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z)
- LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations [25.929492841042666]
We present a novel approach to generate transferable targeted adversarial examples.
We exploit the vulnerability of deep neural networks to perturbations on high-frequency components of images.
Our proposed approach significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T04:54:55Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing natural inputs.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Adversarial Bone Length Attack on Action Recognition [4.9631159466100305]
We show that adversarial attacks can be performed on skeleton-based action recognition models.
Specifically, we restrict the perturbations to the lengths of the skeleton's bones, which lets an adversary manipulate only approximately 30 effective dimensions (see the bone-length sketch after this list).
arXiv Detail & Related papers (2021-09-13T09:59:44Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden models against adversarial attacks.
We propose a novel detection method that adds extra noise and exploits prediction inconsistency to detect adversarial examples (see the detection sketch after this list).
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
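As a rough illustration of the semantic-mask idea from the Adversarial Semantic Mask entry above, here is a minimal sketch. It uses a generic FGSM-style step rather than ASMA's generative model, and the epsilon value and region choice are assumptions:

```python
import numpy as np

def masked_step(image, grad, mask, epsilon=8 / 255):
    """One FGSM-style step confined to a semantic region.

    image: (H, W, 3) floats in [0, 1]; grad: loss gradient w.r.t. the image;
    mask: (H, W) binary map of the target region (e.g. from a face parser).
    Pixels outside the mask are left untouched, which is what keeps the
    perturbation localized and hard to notice.
    """
    step = epsilon * np.sign(grad) * mask[..., None]  # zero outside the region
    return np.clip(image + step, 0.0, 1.0)

image = np.random.rand(64, 64, 3)
grad = np.random.randn(64, 64, 3)
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0          # pretend this is a parsed facial region
adv = masked_step(image, grad, mask)
```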
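For the bone-length attack entry, a minimal sketch of the length-only perturbation under an assumed toy skeleton topology; the real work optimizes the deltas against a recognition model, which is omitted here:

```python
import numpy as np

# Parent of each joint (-1 = root), listed so every parent precedes its
# children. This 16-joint topology is made up for illustration only.
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14]

def perturb_bone_lengths(joints, deltas):
    """Rescale bone j by (1 + deltas[j]) while preserving all joint angles.

    joints: (J, 3) positions; deltas: (J,) relative length changes (the root
    entry is unused). Restricting the attack to bone lengths leaves only
    about J - 1 (~30 in the paper's skeletons) effective dimensions.
    """
    out = joints.copy()
    for j, p in enumerate(PARENTS):
        if p < 0:
            continue                                # the root has no incoming bone
        bone = joints[j] - joints[p]                # original bone vector
        out[j] = out[p] + (1.0 + deltas[j]) * bone  # same direction, new length
    return out

skeleton = np.random.randn(16, 3)
adv = perturb_bone_lengths(skeleton, deltas=np.full(16, 0.05))  # 5% longer bones
```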
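And for the saliency-map detection entry, a sketch of the prediction-inconsistency half of the idea; the saliency-map component and all parameter values are assumptions, and `predict` stands in for any classifier:

```python
import numpy as np

def inconsistency_score(predict, x, sigma=0.05, n=8, rng=None):
    """Fraction of noisy copies of x whose predicted label flips.

    Adversarial examples tend to sit close to decision boundaries, so their
    labels are unstable under small Gaussian noise, while natural inputs
    usually keep the same label. A high score therefore flags x as adversarial.
    """
    rng = rng or np.random.default_rng(0)
    base = predict(x[None])[0]                                 # label of the clean input
    noisy = x[None] + sigma * rng.standard_normal((n,) + x.shape)
    labels = predict(np.clip(noisy, 0.0, 1.0))                 # labels of noisy copies
    return float(np.mean(labels != base))

def predict(batch):
    # Toy stand-in classifier: thresholds mean intensity. A real deployment
    # would call the defended network here.
    return (batch.mean(axis=(1, 2, 3)) > 0.5).astype(int)

x = np.random.rand(32, 32, 3)
print(f"inconsistency score: {inconsistency_score(predict, x):.2f}")
```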