Discrete Point-wise Attack Is Not Enough: Generalized Manifold
Adversarial Attack for Face Recognition
- URL: http://arxiv.org/abs/2301.06083v2
- Date: Sat, 8 Apr 2023 02:47:42 GMT
- Title: Discrete Point-wise Attack Is Not Enough: Generalized Manifold
Adversarial Attack for Face Recognition
- Authors: Qian Li, Yuxiao Hu, Ye Liu, Dongxiao Zhang, Xin Jin, Yuntian Chen
- Abstract summary: We introduce a new pipeline of Generalized Manifold Adversarial Attack (GMAA) to achieve a better attack performance.
GMAA expands the target to be attacked from one to many to encourage a good generalization ability for the generated adversarial examples.
We demonstrate the effectiveness of our method based on extensive experiments, and reveal that GMAA promises a semantic continuous adversarial space with a higher generalization ability and visual quality.
- Score: 10.03652348636603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classical adversarial attacks on Face Recognition (FR) models typically
generate discrete adversarial examples for a target identity from a single state
image. However, such a point-wise attack paradigm generalizes poorly across the
numerous unknown states of that identity and can be easily defended. In this paper,
by rethinking the inherent relationship between the face of a target identity and
its variants, we introduce a new pipeline, Generalized Manifold Adversarial Attack
(GMAA), which achieves better attack performance by expanding the attack range.
Specifically, this expansion lies in two aspects: GMAA not only expands the target
to be attacked from one state to many, encouraging good generalization of the
generated adversarial examples, but also expands those examples from discrete
points to a manifold by exploiting the domain knowledge that facial expression
changes are continuous, which strengthens the attack much as a data augmentation
mechanism does. Moreover, we design a dual supervision with local and global
constraints as a minor contribution to improve the visual quality of the generated
adversarial examples. Extensive experiments demonstrate the effectiveness of our
method and show that GMAA yields a semantically continuous adversarial space with
higher generalization ability and visual quality.
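To make the expanded attack objective concrete, here is a minimal sketch in PyTorch of optimizing one adversarial image against many sampled states of the target identity. The face-embedding model `fr_model`, the expression-conditioned generator `expression_generator`, the expression-code dimension, and the additive-perturbation formulation are all assumptions made for illustration; the sketch covers only the one-to-many, manifold-sampled objective, not the full GMAA pipeline or its dual supervision.

```python
import torch
import torch.nn.functional as F

def manifold_impersonation_attack(source_img, target_img, fr_model,
                                  expression_generator, epsilon=8 / 255,
                                  steps=50, lr=0.01, n_states=8):
    """Sketch of a one-to-many impersonation attack over an expression manifold.

    Rather than matching a single target photo, the perturbed source image is
    pushed toward the embeddings of several target-identity variants sampled
    from a continuous expression manifold. All components are placeholders.
    """
    delta = torch.zeros_like(source_img, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        # Sample several states of the target identity along the (assumed)
        # continuous expression manifold.
        codes = torch.rand(n_states, 3)                      # hypothetical expression codes
        targets = expression_generator(target_img, codes)    # (n_states, C, H, W)
        target_emb = F.normalize(fr_model(targets), dim=-1)  # (n_states, d)

        adv = torch.clamp(source_img + delta, 0.0, 1.0)
        adv_emb = F.normalize(fr_model(adv), dim=-1)         # (1, d)

        # Maximize cosine similarity to *all* sampled target states at once.
        loss = -(adv_emb @ target_emb.t()).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Keep the perturbation within an L_inf budget.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return torch.clamp(source_img + delta, 0.0, 1.0).detach()
```

Averaging the similarity over many sampled states, rather than matching one fixed image, is what gives the adversarial example a chance to transfer to unseen states of the target identity.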
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) that generates adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
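As a toy illustration of confining a perturbation to semantic regions, the snippet below simply gates an additive perturbation with a precomputed binary face-parsing mask; the function name, mask source, and budget are assumptions, and ASMA itself learns the mask with a generative model rather than taking it as input.

```python
import torch

def apply_masked_perturbation(image, perturbation, semantic_mask, epsilon=8 / 255):
    """Restrict an adversarial perturbation to selected semantic face regions.

    `semantic_mask` is a {0, 1} tensor (e.g. derived from a face-parsing model)
    broadcast over the channel dimension, so pixels outside the chosen regions
    are left untouched. Purely illustrative; not the ASMA generator.
    """
    bounded = torch.clamp(perturbation, -epsilon, epsilon) * semantic_mask
    return torch.clamp(image + bounded, 0.0, 1.0)
```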
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structures.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Rethinking Impersonation and Dodging Attacks on Face Recognition Systems [38.37530847215405]
Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations.
Previous methods often achieve a successful impersonation attack on FR; however, this does not necessarily guarantee a successful dodging attack on FR in the black-box setting.
We propose a novel attack method, termed Adversarial Pruning (Adv-Pruning), that fine-tunes existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities.
arXiv Detail & Related papers (2024-01-17T01:10:17Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size (for contrast, a generic PGD baseline is sketched after this list).
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how composite adversarial attack (CAA) affects the resulting image.
It provides real-time inference for different models, which facilitates users' configuration of the attack-level parameters.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
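For background on the projected-gradient attacks that several entries above build on or contrast with (e.g. the random restarts and step-size search that G-PGA avoids), here is a minimal, generic L_inf PGD loop with random restarts in PyTorch. It is a textbook baseline sketched under assumed classifier-style inputs, not the method of any paper listed here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20, restarts=3):
    """Generic untargeted L_inf PGD with random restarts (textbook baseline)."""
    best_adv = x.clone()
    best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)

    for _ in range(restarts):
        # Random start inside the L_inf ball.
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
        delta.requires_grad_(True)

        for _ in range(steps):
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
            loss.backward()
            with torch.no_grad():
                # Ascend the loss and project back onto the L_inf ball.
                delta += alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()

        # Keep, per example, the restart that achieved the highest loss.
        with torch.no_grad():
            adv = torch.clamp(x + delta, 0.0, 1.0)
            per_example = F.cross_entropy(model(adv), y, reduction="none")
            improved = per_example > best_loss
            best_loss = torch.where(improved, per_example, best_loss)
            best_adv[improved] = adv[improved]

    return best_adv.detach()
```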
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.