Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition
- URL: http://arxiv.org/abs/2210.06871v1
- Date: Thu, 13 Oct 2022 09:56:36 GMT
- Title: Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition
- Authors: Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen,
Xiaokang Yang, Chao Ma
- Abstract summary: Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
- Score: 111.1952945740271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have shown their vulnerability when dealing
with adversarial attacks. Existing attacks mostly operate on low-level
instances, such as pixels and super-pixels, and rarely exploit semantic
clues. For face recognition attacks, existing methods typically generate
l_p-norm perturbations on pixels, which results in low attack
transferability and high vulnerability to denoising defense models. In this
work, instead of perturbing low-level pixels, we propose to generate
attacks by perturbing high-level semantics to improve attack
transferability. Specifically, a unified flexible framework, Adversarial
Attributes (Adv-Attribute), is designed to generate inconspicuous and
transferable attacks on face recognition: it crafts adversarial noise and
injects it into different facial attributes, guided by the differences in
face recognition features relative to the target. Moreover, an
importance-aware attribute selection and a multi-objective optimization
strategy are introduced to further balance stealthiness and attack
strength. Extensive experiments on the FFHQ and CelebA-HQ datasets show
that the proposed Adv-Attribute method achieves state-of-the-art attack
success rates while maintaining better visual quality than recent attack
methods.
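To make the described pipeline concrete, here is a minimal PyTorch sketch of the idea: optimize an offset on an attribute vector so that the generated face's recognition features move toward the target identity, trade that off against a stealthiness penalty, and restrict updates to the most important attributes. The generator, recognizer, attribute dimensionality, and the gradient-magnitude proxy for importance-aware selection are all illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of an attribute-space attack (NOT the authors' code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for a pretrained attribute-conditioned face generator and a
# face recognition feature extractor (real models, e.g. a StyleGAN-based
# attribute editor and an ArcFace backbone, would replace these layers).
generator = torch.nn.Linear(16, 512)      # attribute vector -> face "image"
recognizer = torch.nn.Linear(512, 128)    # face "image" -> identity features

attrs = torch.zeros(1, 16)                             # source attributes
target_feat = F.normalize(torch.randn(1, 128), dim=1)  # target identity

delta = torch.zeros_like(attrs, requires_grad=True)    # attribute offset
opt = torch.optim.Adam([delta], lr=0.05)
stealth_weight = 0.1  # multi-objective trade-off: stealth vs. attack strength
top_k = 4             # crude importance-aware selection: edit only k attributes

for step in range(200):
    face = generator(attrs + delta)
    feat = F.normalize(recognizer(face), dim=1)
    id_loss = 1.0 - (feat * target_feat).sum()  # pull toward target identity
    stealth_loss = delta.pow(2).sum()           # keep attribute edits small
    loss = id_loss + stealth_weight * stealth_loss
    opt.zero_grad()
    loss.backward()
    # Keep gradients only for the top-k attributes by gradient magnitude,
    # a rough stand-in for importance-aware attribute selection.
    keep = torch.zeros_like(delta.grad)
    keep.scatter_(1, delta.grad.abs().topk(top_k, dim=1).indices, 1.0)
    delta.grad.mul_(keep)
    opt.step()

print(f"final identity loss: {id_loss.item():.3f}")
```
With real models, the same loop edits a handful of semantic attributes (e.g. hairstyle or expression) rather than pixels, which is what gives the attack its claimed transferability and inconspicuousness.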
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) that can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for good stealthiness (a minimal masking sketch appears after this list).
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
- Rethinking Impersonation and Dodging Attacks on Face Recognition Systems [38.37530847215405]
Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations.
Previous methods often achieve a successful impersonation attack on FR; however, this does not necessarily guarantee a successful dodging attack on FR in the black-box setting.
We propose a novel attack method termed Adversarial Pruning (Adv-Pruning) that fine-tunes existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities.
arXiv Detail & Related papers (2024-01-17T01:10:17Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that can generate imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations [25.929492841042666]
We present a novel approach to generate transferable targeted adversarial examples.
We exploit the vulnerability of deep neural networks to perturbations on high-frequency components of images.
Our proposed approach significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T04:54:55Z)
- Discrete Point-wise Attack Is Not Enough: Generalized Manifold Adversarial Attack for Face Recognition [10.03652348636603]
We introduce a new pipeline of Generalized Manifold Adversarial Attack (GMAA) to achieve a better attack performance.
GMAA expands the target to be attacked from one to many, encouraging good generalization ability for the generated adversarial examples.
We demonstrate the effectiveness of our method through extensive experiments, and reveal that GMAA promises a semantically continuous adversarial space with higher generalization ability and visual quality.
arXiv Detail & Related papers (2022-12-19T02:57:55Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network-based architecture to semantically generate high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
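As flagged in the Adversarial Semantic Mask entry above, here is a minimal sketch of the masking idea it describes: run an otherwise standard iterative perturbation, but zero out the noise everywhere except a local semantic region via an elementwise mask. The toy classifier, the square placeholder mask (standing in for a real face-parsing mask), and the loss are illustrative assumptions, not the ASMA authors' method.
```python
# Hypothetical sketch of a semantically masked perturbation (NOT ASMA's code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for a face model under attack.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])

# Binary mask selecting a local semantic region; a square patch here as a
# placeholder for a real face-parsing mask (e.g. an eye or mouth region).
mask = torch.zeros_like(image)
mask[:, :, 8:20, 8:20] = 1.0

delta = torch.zeros_like(image, requires_grad=True)
eps, alpha = 8 / 255, 2 / 255
for _ in range(10):                         # PGD confined to the masked region
    loss = F.cross_entropy(model(image + delta * mask), label)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()  # ascend the loss (untargeted)
        delta.clamp_(-eps, eps)             # keep the noise inconspicuous
    delta.grad.zero_()

adv = (image + delta.detach() * mask).clamp(0, 1)
print("noise outside the mask:", ((adv - image).abs() * (1 - mask)).sum().item())
```
Because the gradient only flows through masked pixels, the final example differs from the original image inside the chosen semantic region alone, which is the stealthiness mechanism the entry summarizes.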