PatchAttack: A Black-box Texture-based Attack with Reinforcement
Learning
- URL: http://arxiv.org/abs/2004.05682v2
- Date: Sun, 19 Jul 2020 22:36:25 GMT
- Title: PatchAttack: A Black-box Texture-based Attack with Reinforcement
Learning
- Authors: Chenglin Yang, Adam Kortylewski, Cihang Xie, Yinzhi Cao, and Alan
Yuille
- Abstract summary: Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification.
Our proposed PatchAttack is query efficient and can break models for both targeted and non-targeted attacks.
- Score: 31.255179167694887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patch-based attacks introduce a perceptible but localized change to the input
that induces misclassification. A limitation of current patch-based black-box
attacks is that they perform poorly for targeted attacks, and even for the less
challenging non-targeted scenarios, they require a large number of queries. Our
proposed PatchAttack is query efficient and can break models for both targeted
and non-targeted attacks. PatchAttack induces misclassifications by
superimposing small textured patches on the input image. We parametrize the
appearance of these patches by a dictionary of class-specific textures. This
texture dictionary is learned by clustering Gram matrices of feature
activations from a VGG backbone. PatchAttack optimizes the position and texture
parameters of each patch using reinforcement learning. Our experiments show
that PatchAttack achieves > 99% success rate on ImageNet for a wide range of
architectures, while only manipulating 3% of the image for non-targeted attacks
and 10% on average for targeted attacks. Furthermore, we show that PatchAttack
circumvents state-of-the-art adversarial defense methods successfully.
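The texture-dictionary step described in the abstract can be sketched as follows: compute a Gram matrix of convolutional feature activations per image, flatten it into a descriptor, and cluster the descriptors. This is a minimal illustration, not the paper's implementation; the VGG activations are replaced by random placeholder arrays, and the layer shape (16 channels, 8x8 maps) and cluster count (5) are arbitrary assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Flattened upper triangle of the normalized Gram matrix.

    features: (C, H, W) feature activations from one conv layer.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    g = f @ f.T / (c * h * w)          # normalized C x C Gram matrix
    return g[np.triu_indices(c)]       # symmetric, so keep upper triangle

def kmeans(x, k, iters=20, seed=0):
    """Tiny k-means in NumPy; stands in for any off-the-shelf clusterer."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        dists = ((x[:, None] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return centers, labels

# Placeholder for per-class VGG activations: 100 images, 16-channel 8x8 maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16, 8, 8))
descriptors = np.stack([gram_matrix(f) for f in feats])

# Cluster centers serve as the texture-dictionary entries for this class.
centers, labels = kmeans(descriptors, k=5)
print(centers.shape)  # (5, 136): five texture descriptors of length 16*17/2
```

In the paper's pipeline these cluster centers parametrize class-specific textures, and a reinforcement-learning agent then selects the patch position and which dictionary entry to render; that RL loop is not sketched here.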
Related papers
- Semi-supervised 3D Object Detection with PatchTeacher and PillarMix [71.4908268136439]
Current semi-supervised 3D object detection methods typically use a teacher to generate pseudo labels for a student.
We propose PatchTeacher, which focuses on partial scene 3D object detection to provide high-quality pseudo labels for the student.
We introduce three key techniques, i.e., Patch Normalizer, Quadrant Align, and Fovea Selection, to improve the performance of PatchTeacher.
arXiv Detail & Related papers (2024-07-13T06:58:49Z)
- Query-Efficient Decision-based Black-Box Patch Attack [36.043297146652414]
We propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks.
DevoPatch outperforms state-of-the-art black-box patch attacks in terms of patch area and attack success rate.
We conduct the first vulnerability evaluation of ViT on image classification in the decision-based patch attack setting.
arXiv Detail & Related papers (2023-07-02T05:15:43Z)
- Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection [2.577744341648085]
Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise.
In this work, we perform an in-depth analysis of different patch generation parameters.
Experiments show that inserting a patch inside a window of increasing size during training leads to a significant increase in attack strength.
arXiv Detail & Related papers (2022-09-27T12:59:19Z)
- Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches [28.94435153159868]
An adversarial patch can arbitrarily manipulate image pixels within a restricted region to induce model misclassification.
Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields.
We extend PatchGuard to PatchGuard++ to provably detect adversarial patch attacks, boosting both provable robust accuracy and clean accuracy.
arXiv Detail & Related papers (2021-04-26T14:22:33Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
- PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking [46.03749650789915]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image.
We propose a general defense framework called PatchGuard that achieves high provable robustness while maintaining high clean accuracy against localized adversarial patches.
arXiv Detail & Related papers (2020-05-17T03:38:34Z)
- (De)Randomized Smoothing for Certifiable Defense against Patch Attacks [136.79415677706612]
We introduce a certifiable defense against patch attacks that provides guarantees for a given image and patch attack size.
Our method is related to the broad class of randomized smoothing robustness schemes.
Our results establish a new state of the art in certifiable defense against patch attacks on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2020-02-25T08:39:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.