Generating Transferable and Stealthy Adversarial Patch via
Attention-guided Adversarial Inpainting
- URL: http://arxiv.org/abs/2308.05320v2
- Date: Sun, 1 Oct 2023 09:14:51 GMT
- Authors: Yanjie Li, Mingxing Duan, Xuelong Dai, Bin Xiao
- Abstract summary: We propose an innovative two-stage adversarial patch attack called Adv-Inpainting.
In the first stage, we extract style features and identity features from the attacker and target faces, respectively.
The proposed layer can adaptively fuse identity and style embeddings by fully exploiting priority contextual information.
In the second stage, we design an Adversarial Patch Refinement Network (APR-Net) with a novel boundary variance loss.
- Score: 12.974292128917222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial patch attacks can fool the face recognition (FR) models via small
patches. However, previous adversarial patch attacks often result in unnatural
patterns that are easily noticeable. Generating transferable and stealthy
adversarial patches that can efficiently deceive the black-box FR models while
having good camouflage is challenging because of the huge stylistic difference
between the source and target images. To generate transferable,
natural-looking, and stealthy adversarial patches, we propose an innovative
two-stage attack called Adv-Inpainting, which extracts style features and
identity features from the attacker and target faces, respectively, and then
fills the patches with misleading and inconspicuous content guided by attention
maps. In the first stage, we extract multi-scale style embeddings by a
pyramid-like network and identity embeddings by a pretrained FR model and
propose a novel Attention-guided Adaptive Instance Normalization layer (AAIN)
to merge them via background-patch cross-attention maps. The proposed layer can
adaptively fuse identity and style embeddings by fully exploiting priority
contextual information. In the second stage, we design an Adversarial Patch
Refinement Network (APR-Net) with a novel boundary variance loss, a spatial
discounted reconstruction loss, and a perceptual loss to boost the stealthiness
further. Experiments demonstrate that our attack can generate adversarial
patches with improved visual quality, better stealthiness, and stronger
transferability than state-of-the-art adversarial patch attacks and semantic
attacks.
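The abstract describes the AAIN layer and the boundary variance loss only at a high level. As a rough illustration, the following is a minimal NumPy sketch of the ingredients they build on: plain Adaptive Instance Normalization, an attention-weighted blend of identity and style statistics, and one plausible reading of a boundary variance penalty. The function names, the blending rule, and the border-difference formulation are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive Instance Normalization: strip the content features'
    per-channel statistics and impose the style statistics instead.
    content has shape (C, H, W)."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True) + eps
    return style_std * (content - mu) / std + style_mean

def attention_guided_fusion(content, style_stats, id_stats, attn):
    """Hypothetical AAIN-style fusion: a cross-attention map in [0, 1]
    decides, per spatial location, whether the normalized features
    follow the identity statistics or the style statistics."""
    styled = adain(content, *style_stats)
    identity = adain(content, *id_stats)
    return attn * identity + (1.0 - attn) * styled

def boundary_variance_loss(image, mask):
    """One plausible reading of a boundary variance loss: collect the
    pixel differences across the patch border and penalize their
    variance, so the seam varies smoothly rather than jumping around.
    (Assumes the patch does not touch the image edge; np.roll wraps.)"""
    m = mask.astype(bool)
    diffs = []
    for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        # patch pixels whose neighbour in this direction is background
        border = m & np.roll(~m, (dy, dx), axis=(0, 1))
        diffs.append((image - np.roll(image, (dy, dx), axis=(0, 1)))[border])
    diffs = np.concatenate(diffs)
    return float(diffs.var()) if diffs.size else 0.0
```

Under this reading, a patch whose border pixels all differ from the background by the same amount incurs zero loss, while a seam with uneven jumps is penalized.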
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection [37.77615360932841]
Object detection techniques for Unmanned Aerial Vehicles rely on Deep Neural Networks (DNNs).
However, adversarial patches generated by existing algorithms in the UAV domain pay very little attention to the naturalness of the patches.
We propose a new method named Environmental Matching Attack (EMA) to address the issue of optimizing the adversarial patch under color constraints.
arXiv Detail & Related papers (2024-05-13T09:56:57Z) - Towards Robust Image Stitching: An Adaptive Resistance Learning against
Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions which go unnoticed by the human visual system tend to attack the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Adversarial Pixel Restoration as a Pretext Task for Transferable
Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z) - Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in the benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z) - Towards Transferable Adversarial Attacks on Vision Transformers [110.55845478440807]
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples.
We introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs.
arXiv Detail & Related papers (2021-09-09T11:28:25Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called adversarial patch, draws researchers' attention due to its strong attack abilities.
We propose an approach to generate adversarial patches with one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack on Deep Neural Networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.