Improving the Transferability of Adversarial Attacks on Face Recognition
with Beneficial Perturbation Feature Augmentation
- URL: http://arxiv.org/abs/2210.16117v4
- Date: Wed, 19 Jul 2023 07:34:37 GMT
- Title: Improving the Transferability of Adversarial Attacks on Face Recognition
with Beneficial Perturbation Feature Augmentation
- Authors: Fengfan Zhou, Hefei Ling, Yuxuan Shi, Jiazhong Chen, Zongyi Li, Ping
Li
- Abstract summary: Face recognition (FR) models can be easily fooled by adversarial examples, which are crafted by adding imperceptible perturbations on benign face images.
In this paper, we improve the transferability of adversarial face examples to expose more blind spots of existing FR models.
We propose a novel attack method called Beneficial Perturbation Feature Augmentation Attack (BPFA).
- Score: 26.032639566914114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) models can be easily fooled by adversarial examples,
which are crafted by adding imperceptible perturbations on benign face images.
The existence of adversarial face examples poses a great threat to the security
of society. In order to build a more sustainable digital nation, in this paper,
we improve the transferability of adversarial face examples to expose more
blind spots of existing FR models. Though generating hard samples has shown its
effectiveness in improving the generalization of models in training tasks, the
effectiveness of utilizing this idea to improve the transferability of
adversarial face examples remains unexplored. To this end, based on the
property of hard samples and the symmetry between training tasks and
adversarial attack tasks, we propose the concept of hard models, which have
similar effects as hard samples for adversarial attack tasks. Utilizing the
concept of hard models, we propose a novel attack method called Beneficial
Perturbation Feature Augmentation Attack (BPFA), which reduces the overfitting
of adversarial examples to surrogate FR models by constantly generating new
hard models to craft the adversarial examples. Specifically, in the
backpropagation, BPFA records the gradients on pre-selected feature maps and
uses the gradient on the input image to craft the adversarial example. In the
next forward propagation, BPFA leverages the recorded gradients to add
beneficial perturbations on their corresponding feature maps to increase the
loss. Extensive experiments demonstrate that BPFA can significantly boost the
transferability of adversarial attacks on FR.
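To make the loop described in the abstract concrete, below is a minimal PyTorch-style sketch of the idea: gradients on pre-selected feature maps are recorded during backpropagation, and in the next forward pass they are added back to those feature maps as beneficial perturbations that increase the loss, so each iteration effectively attacks a harder surrogate model. All names and hyperparameters (surrogate, hook_layers, loss_fn, alpha, beta, eps) are illustrative assumptions, not the authors' released code.

```python
# Illustrative BPFA-style loop (assumed details, not the authors' implementation).
import torch


def bpfa_attack(surrogate, loss_fn, x_benign, target_emb, hook_layers,
                steps=10, eps=8 / 255, alpha=1 / 255, beta=0.01):
    """Craft an adversarial face image while perturbing pre-selected feature
    maps with their own recorded gradients ("beneficial perturbations")."""
    feat_grads = {name: None for name in hook_layers}  # recorded per-layer gradients
    modules = dict(surrogate.named_modules())

    def make_hook(name):
        def hook(module, inputs, output):
            # Add the gradient recorded in the previous iteration as a
            # beneficial perturbation that increases the loss.
            if feat_grads[name] is not None:
                output = output + beta * feat_grads[name]
            # Record this feature map's gradient during the next backward pass.
            output.register_hook(lambda g, n=name: feat_grads.update({n: g.detach()}))
            return output
        return hook

    handles = [modules[n].register_forward_hook(make_hook(n)) for n in hook_layers]

    x_adv = x_benign.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(surrogate(x_adv), target_emb)
        surrogate.zero_grad()
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()               # ascend the loss
            x_adv = x_benign + (x_adv - x_benign).clamp(-eps, eps)  # L-inf budget
            x_adv = x_adv.clamp(0, 1)
        x_adv.requires_grad_(True)

    for h in handles:
        h.remove()
    return x_adv.detach()
```

In this sketch, loss_fn could be, for example, one minus the cosine similarity between the surrogate's embedding of the adversarial image and a target identity's embedding, and hook_layers names the pre-selected feature maps the abstract refers to.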
Related papers
- Boosting the Targeted Transferability of Adversarial Examples via Salient Region & Weighted Feature Drop [2.176586063731861]
A prevalent approach for adversarial attacks relies on the transferability of adversarial examples.
A novel framework based on Salient Region & Weighted Feature Drop (SWFD) is designed to enhance the targeted transferability of adversarial examples.
arXiv Detail & Related papers (2024-11-11T08:23:37Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models [47.72177312801278]
Adversarial attacks on Face Recognition (FR) systems have proven highly effective in compromising pure FR models.
We propose a novel setting of adversarially attacking both FR and Face Anti-Spoofing (FAS) models simultaneously.
We introduce a new attack method, namely Style-aligned Distribution Biasing (SDB), to improve the capacity of black-box attacks on both FR and FAS models.
arXiv Detail & Related papers (2024-05-27T08:30:29Z)
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structure.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- LFAA: Crafting Transferable Targeted Adversarial Examples with Low-Frequency Perturbations [25.929492841042666]
We present a novel approach to generate transferable targeted adversarial examples.
We exploit the vulnerability of deep neural networks to perturbations on high-frequency components of images.
Our proposed approach significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T04:54:55Z)
- Generating Adversarial Examples with Better Transferability via Masking Unimportant Parameters of Surrogate Model [6.737574282249396]
We propose to improve the transferability of adversarial examples in transfer-based attacks via masking unimportant parameters (MUP).
The key idea in MUP is to refine the pretrained surrogate models to boost the transfer-based attack.
arXiv Detail & Related papers (2023-04-14T03:06:43Z)
- Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which can increase the diversity of surrogate models (see the sketch after this list).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
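As a rough illustration of the dropout-based surrogate diversification summarized in the DFANet entry above, the sketch below (an assumed rendering of the general idea, not the paper's released code) applies dropout to every convolutional feature map at attack time, so each attack iteration effectively queries a slightly different surrogate model.

```python
# Assumed illustration of attack-time dropout on convolutional feature maps.
import torch.nn as nn
import torch.nn.functional as F


def add_attack_time_dropout(surrogate: nn.Module, p: float = 0.1):
    """Register hooks that randomly drop convolutional activations while
    crafting adversarial examples, increasing surrogate-model diversity.
    Call .remove() on each returned handle to restore the original model."""
    def hook(module, inputs, output):
        # training=True keeps dropout active even when the model is in eval() mode
        return F.dropout(output, p=p, training=True)

    return [m.register_forward_hook(hook)
            for m in surrogate.modules() if isinstance(m, nn.Conv2d)]
```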