Generating Visually Realistic Adversarial Patch
- URL: http://arxiv.org/abs/2312.03030v1
- Date: Tue, 5 Dec 2023 11:07:39 GMT
- Title: Generating Visually Realistic Adversarial Patch
- Authors: Xiaosen Wang, Kunyu Wang
- Abstract summary: A high-quality adversarial patch should be realistic, position-irrelevant, and printable to be deployed in the physical world.
We propose an effective attack called VRAP to generate visually realistic adversarial patches.
VRAP constrains the patch to the neighborhood of a real image to ensure visual realism, optimizes the patch at the poorest position for position irrelevance, and adopts Total Variation loss as well as gamma transformation to make the generated patch printable without losing information.
- Score: 5.41648734119775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to various types of adversarial
examples, posing serious threats to security-critical applications. Among these,
adversarial patches have drawn increasing attention due to their good
applicability for fooling DNNs in the physical world. However, existing works often
generate patches with meaningless noise or patterns, making them conspicuous to
humans. To address this issue, we explore how to generate visually realistic
adversarial patches to fool DNNs. Firstly, we argue that a high-quality
adversarial patch should be realistic, position-irrelevant, and printable to be
deployed in the physical world. Based on this analysis, we propose an effective
attack called VRAP to generate visually realistic adversarial patches.
Specifically, VRAP constrains the patch to the neighborhood of a real image to
ensure visual realism, optimizes the patch at the poorest position for
position irrelevance, and adopts Total Variation loss as well as gamma
transformation to make the generated patch printable without losing
information. Empirical evaluations on the ImageNet dataset demonstrate that the
proposed VRAP exhibits outstanding attack performance in the digital world.
Moreover, the generated adversarial patches can be disguised as scrawls or
logos in the physical world to fool deep models without being detected,
posing significant threats to DNN-enabled applications.
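The three requirements above map directly onto three terms of an optimization loop. Below is a minimal PyTorch sketch of one such step; the projection radius `eps`, gamma value, TV weight, step size, and candidate position grid are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def total_variation(patch):
    # TV loss: penalize high-frequency noise so the patch prints cleanly.
    dh = (patch[:, :, 1:, :] - patch[:, :, :-1, :]).abs().mean()
    dw = (patch[:, :, :, 1:] - patch[:, :, :, :-1]).abs().mean()
    return dh + dw

def apply_patch(image, patch, y, x):
    # Paste the patch onto a copy of the image at position (y, x).
    out = image.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, y:y + ph, x:x + pw] = patch
    return out

def vrap_step(model, image, label, patch, real_patch, positions,
              eps=16 / 255, gamma=0.8, tv_weight=1e-3, lr=1 / 255):
    # (1) Position irrelevance: find the "poorest" placement, i.e. the
    # candidate position where the attack currently works worst.
    with torch.no_grad():
        losses = [F.cross_entropy(model(apply_patch(image, patch, y, x)),
                                  label).item() for (y, x) in positions]
    y, x = positions[losses.index(min(losses))]

    # (2) Printability: optimize through a gamma transformation that
    # simulates printing distortion, and penalize high total variation.
    patch = patch.clone().detach().requires_grad_(True)
    printed = patch.clamp(1e-6, 1.0).pow(gamma)
    loss = (F.cross_entropy(model(apply_patch(image, printed, y, x)), label)
            - tv_weight * total_variation(patch))
    loss.backward()

    with torch.no_grad():
        stepped = patch + lr * patch.grad.sign()  # ascend the attack loss
        # (3) Visual realism: project back into the eps-neighborhood of the
        # real reference image (e.g. a logo), then into the valid pixel range.
        projected = real_patch + (stepped - real_patch).clamp(-eps, eps)
        return projected.clamp(0.0, 1.0)
```

Iterating `vrap_step` over images and position grids yields a patch that stays visually close to the reference image while remaining adversarial at its weakest placement.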
Related papers
- Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- Random Position Adversarial Patch for Vision Transformers [0.0]
This paper proposes a novel method for generating an adversarial patch (G-Patch).
Instead of directly optimizing the patch using gradients, we employ a GAN-like structure to generate the adversarial patch.
Experiments show the effectiveness of the adversarial patch in achieving universal attacks on vision transformers, both in digital and physical-world scenarios.
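The summary does not spell out the architecture, so the following is only a hedged sketch of the general idea: a small generator network produces the patch, and the frozen victim model supplies the adversarial training signal. The generator layout, latent size, and patch size here are all hypothetical.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGenerator(nn.Module):
    # Hypothetical generator: maps a latent vector to an RGB patch in [0, 1].
    def __init__(self, latent_dim=128, patch_size=64):
        super().__init__()
        self.patch_size = patch_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * patch_size * patch_size), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, self.patch_size, self.patch_size)

def generator_step(gen, opt, victim, images, labels, latent_dim=128):
    # Rather than optimizing patch pixels directly, update the generator so
    # its patches fool the frozen victim model at random positions.
    z = torch.randn(images.size(0), latent_dim)
    patches = gen(z)
    b, _, h, w = images.shape
    ps = gen.patch_size
    patched = images.clone()
    for i in range(b):
        y, x = random.randint(0, h - ps), random.randint(0, w - ps)
        patched[i, :, y:y + ps, x:x + ps] = patches[i]
    loss = -F.cross_entropy(victim(patched), labels)  # maximize victim error
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```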
arXiv Detail & Related papers (2023-07-09T00:08:34Z)
- Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves state-of-the-art robust accuracy without degrading benign performance.
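The summary leaves the mechanism implicit; judging by the name, a plausible reading is a detect-and-zero pipeline. The sketch below assumes a pixel-level patch detector (`patch_detector`) is available; the detector, threshold, and zero repainting are assumptions, not details taken from the paper.

```python
import torch

def patch_zero_defense(image, patch_detector, task_model, threshold=0.5):
    # (1) A pixel-level detector scores how likely each pixel belongs to an
    #     adversarial patch (shape (B, 1, H, W), values in [0, 1]).
    # (2) Suspected pixels are "zeroed out" of the image.
    # (3) The sanitized image is passed to any downstream task model,
    #     which is what makes the defense task-agnostic.
    with torch.no_grad():
        mask = (patch_detector(image) > threshold).float()
        sanitized = image * (1.0 - mask)
        return task_model(sanitized)
```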
arXiv Detail & Related papers (2022-07-05T03:49:08Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve an attacker's ability to induce per-pixel misclassifications.
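The exact loss is not given in the summary; one common idea in this family of segmentation attacks is to concentrate the attack on pixels the model still classifies correctly. The sketch below illustrates that general scheme, not the paper's specific formulation.

```python
import torch
import torch.nn.functional as F

def pixel_misclassification_loss(logits, labels):
    # logits: (B, C, H, W) segmentation scores; labels: (B, H, W) class ids.
    # Per-pixel cross-entropy restricted to pixels the model still gets
    # right, so the attack budget is spent where it has not yet succeeded.
    ce = F.cross_entropy(logits, labels, reduction="none")    # (B, H, W)
    still_correct = (logits.argmax(dim=1) == labels).float()  # (B, H, W)
    return (ce * still_correct).sum() / still_correct.sum().clamp(min=1.0)
```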
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices [8.437172062224034]
A variant of adversarial examples, the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches from one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
- Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
We consider adversarial patches: clearly visible but adversarially crafted rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
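As a rough illustration of the combination described here, the sketch below first searches a grid of candidate locations for the placement that maximizes the loss, then takes a standard training step on the patched batch. It uses a fixed patch for simplicity; the paper's approach also crafts the patch contents, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def adv_train_step(model, opt, images, labels, patch, positions):
    ph, pw = patch.shape[-2:]

    def paste(y, x):
        out = images.clone()
        out[:, :, y:y + ph, x:x + pw] = patch
        return out

    # (1) Location optimization: pick the placement that maximizes the loss.
    model.eval()
    with torch.no_grad():
        losses = [F.cross_entropy(model(paste(y, x)), labels).item()
                  for (y, x) in positions]
    y, x = positions[losses.index(max(losses))]

    # (2) Adversarial training: a standard step on the worst-case patched
    #     batch with the original (correct) labels.
    model.train()
    loss = F.cross_entropy(model(paste(y, x)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```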
arXiv Detail & Related papers (2020-05-05T16:17:00Z)