Generating Transferable and Stealthy Adversarial Patch via
Attention-guided Adversarial Inpainting
- URL: http://arxiv.org/abs/2308.05320v2
- Date: Sun, 1 Oct 2023 09:14:51 GMT
- Authors: Yanjie Li, Mingxing Duan, Xuelong Dai, Bin Xiao
- Abstract summary: We propose an innovative two-stage adversarial patch attack called Adv-Inpainting.
In the first stage, we extract style features and identity features from the attacker and target faces, respectively.
The proposed layer can adaptively fuse identity and style embeddings by fully exploiting priority contextual information.
In the second stage, we design an Adversarial Patch Refinement Network (APR-Net) with a novel boundary variance loss.
- Score: 12.974292128917222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial patch attacks can fool face recognition (FR) models via small
patches. However, previous adversarial patch attacks often result in unnatural
patterns that are easily noticeable. Generating transferable and stealthy
adversarial patches that can efficiently deceive the black-box FR models while
having good camouflage is challenging because of the huge stylistic difference
between the source and target images. To generate transferable,
natural-looking, and stealthy adversarial patches, we propose an innovative
two-stage attack called Adv-Inpainting, which extracts style features and
identity features from the attacker and target faces, respectively, and then
fills the patches with misleading and inconspicuous content guided by attention
maps. In the first stage, we extract multi-scale style embeddings by a
pyramid-like network and identity embeddings by a pretrained FR model and
propose a novel Attention-guided Adaptive Instance Normalization layer (AAIN)
to merge them via background-patch cross-attention maps. The proposed layer can
adaptively fuse identity and style embeddings by fully exploiting priority
contextual information. In the second stage, we design an Adversarial Patch
Refinement Network (APR-Net) with a novel boundary variance loss, a spatial
discounted reconstruction loss, and a perceptual loss to boost the stealthiness
further. Experiments demonstrate that our attack can generate adversarial
patches with improved visual quality, better stealthiness, and stronger
transferability than state-of-the-art adversarial patch attacks and semantic
attacks.
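As a rough illustration of the first stage, the AAIN layer is described as fusing identity features with style statistics under the guidance of a background-patch cross-attention map. The sketch below is not the paper's implementation; it combines standard AdaIN (re-normalizing features to style statistics) with a hypothetical attention-weighted blend, and all function names and shapes are assumptions:

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Standard AdaIN: re-normalize (C, H, W) content features so that each
    channel matches the given per-channel style mean and std."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    return style_std * (content - mu) / (sigma + eps) + style_mean

def attention_guided_fusion(identity_feat, style_mean, style_std, attn):
    """Hypothetical AAIN-style fusion: blend the stylized identity features
    with the originals, weighted by an attention map in [0, 1] that is
    broadcast over channels (shape (1, H, W))."""
    stylized = adain(identity_feat, style_mean, style_std)
    return attn * stylized + (1.0 - attn) * identity_feat
```

Where the attention map is 1, features take on the patch/background style statistics; where it is 0, the identity features pass through unchanged. The actual AAIN layer presumably learns this modulation rather than applying a fixed blend.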
Related papers
- CapGen: An Environment-Adaptive Generator of Adversarial Patches [12.042510965650205]
Adversarial patches, often used to provide physical stealth protection for critical assets, usually neglect the need for visual harmony with the background environment.
We introduce the Camouflaged Adversarial Pattern Generator (CAPGen), a novel approach that leverages specific base colors from the surrounding environment.
This paper is the first to comprehensively examine the roles played by patterns and colors in the context of adversarial patches.
arXiv Detail & Related papers (2024-12-10T07:24:24Z) - DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model [88.14122962946858]
We propose a novel diffusion-based customizable patch generation framework termed DiffPatch.
Our approach enables users to utilize a reference image as the source, rather than starting from random noise.
We have created a physical adversarial T-shirt dataset, AdvPatch-1K, specifically targeting YOLOv5s.
arXiv Detail & Related papers (2024-12-02T12:30:35Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Adversarial Pixel Restoration as a Pretext Task for Transferable
Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z) - Towards Transferable Adversarial Attacks on Vision Transformers [110.55845478440807]
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples.
We introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs.
arXiv Detail & Related papers (2021-09-09T11:28:25Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called adversarial patch, draws researchers' attention due to its strong attack abilities.
We propose an approach to generate adversarial patches with one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defended models.
We significantly improve the success rate by 9.2% for defended models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack against deep neural networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.