Generating Adversarial yet Inconspicuous Patches with a Single Image
- URL: http://arxiv.org/abs/2009.09774v2
- Date: Wed, 21 Apr 2021 12:05:48 GMT
- Title: Generating Adversarial yet Inconspicuous Patches with a Single Image
- Authors: Jinqi Luo, Tao Bai, Jun Zhao
- Abstract summary: We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
- Score: 15.217367754000913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been shown vulnerable to adversarial patches, where
exotic patterns can result in models' wrong predictions. Nevertheless, existing
approaches to adversarial patch generation hardly consider the contextual
consistency between patches and the image background, causing such patches to be
easily detected and adversarial attacks to fail. On the other hand, these
methods require a large amount of data for training, which is computationally
expensive. To overcome these challenges, we propose an approach to generate
adversarial yet inconspicuous patches with one single image. In our approach,
adversarial patches are produced in a coarse-to-fine way with multiple scales of
generators and discriminators. Contextual information is encoded during the
Min-Max training to make patches consistent with their surroundings. The selection
of patch location is based on the perceptual sensitivity of victim models.
Through extensive experiments, our approach shows strong attacking ability in
both the white-box and black-box settings. Experiments on saliency detection
and user evaluation indicate that our adversarial patches can evade human
observation, demonstrating the inconspicuousness of our approach. Lastly, we
show that our approach preserves its attack ability in the physical world.
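The abstract's placement idea, putting the patch where the victim model is most perceptually sensitive, can be illustrated with a simple saliency-window search. This is a minimal NumPy sketch under our own assumptions (the saliency map, window size, and function name are illustrative, not the paper's implementation; in practice the saliency would come from input gradients of the victim model):

```python
import numpy as np

def select_patch_location(saliency, patch_h, patch_w):
    """Pick the top-left corner whose patch-sized window covers the
    highest total saliency (e.g. |dLoss/dInput| aggregated per pixel)."""
    H, W = saliency.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - patch_h + 1):
        for x in range(W - patch_w + 1):
            score = saliency[y:y + patch_h, x:x + patch_w].sum()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Toy saliency map: the model is most sensitive in a 3x3 block at (4, 5).
sal = np.zeros((10, 10))
sal[4:7, 5:8] = 1.0
print(select_patch_location(sal, 3, 3))  # -> (4, 5)
```

The exhaustive scan is quadratic in image size; a real attack would typically restrict candidates or use an integral image, but the selection criterion is the same.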
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions which go unnoticed by the human visual system tend to attack the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z) - Generating Visually Realistic Adversarial Patch [5.41648734119775]
A high-quality adversarial patch should be realistic, position irrelevant, and printable to be deployed in the physical world.
We propose an effective attack called VRAP, to generate visually realistic adversarial patches.
VRAP constrains the patch to the neighborhood of a real image to ensure visual realism, optimizes the patch at the poorest position for position irrelevance, and adopts a Total Variance loss as well as a gamma transformation to make the generated patch printable without losing information.
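The Total Variance loss and gamma transformation mentioned here are standard printability tools: the former smooths the patch so printers can reproduce it, the latter simulates a camera/printer tone curve. A minimal NumPy sketch (function names are ours, not VRAP's code):

```python
import numpy as np

def total_variation(patch):
    """Sum of absolute differences between neighboring pixels; penalizing
    this term smooths the patch so it survives printing and re-capture."""
    dy = np.abs(np.diff(patch, axis=0)).sum()  # vertical neighbor diffs
    dx = np.abs(np.diff(patch, axis=1)).sum()  # horizontal neighbor diffs
    return dy + dx

def gamma_transform(patch, gamma):
    """Apply a gamma curve to pixel values in [0, 1], approximating the
    nonlinear response of a printer or camera."""
    return np.clip(patch, 0.0, 1.0) ** gamma

checker = np.array([[0.0, 1.0], [1.0, 0.0]])
print(total_variation(checker))  # -> 4.0 (maximally non-smooth 2x2 patch)
```

Optimizing the attack loss plus a weighted total-variation term, while evaluating the patch under random gamma values, is the usual recipe for patches that remain adversarial after printing.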
arXiv Detail & Related papers (2023-12-05T11:07:39Z) - Generating Transferable and Stealthy Adversarial Patch via Attention-guided Adversarial Inpainting [12.974292128917222]
We propose an innovative two-stage adversarial patch attack called Adv-Inpainting.
In the first stage, we extract style features and identity features from the attacker and target faces, respectively.
The proposed layer can adaptively fuse identity and style embeddings by fully exploiting priority contextual information.
In the second stage, we design an Adversarial Patch Refinement Network (APR-Net) with a novel boundary variance loss.
arXiv Detail & Related papers (2023-08-10T03:44:10Z) - Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection [3.395452700023097]
In this work, we have evaluated the existing approaches to generate inconspicuous patches.
We have evaluated two approaches to generate naturalistic patches: by incorporating patch generation into the GAN training process and by using the pretrained GAN.
Our experiments have shown that using a pre-trained GAN helps to obtain realistic-looking patches while preserving performance similar to conventional adversarial patches.
arXiv Detail & Related papers (2022-07-15T08:48:40Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called adversarial patch, draws researchers' attention due to its strong attack abilities.
We propose an approach to generate adversarial patches with one single image.
Our approach shows the strong attack abilities in white-box settings and the excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies failed to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
Adversarial patches are clearly visible, but adversarially crafted, rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.