Query-Efficient Decision-based Black-Box Patch Attack
- URL: http://arxiv.org/abs/2307.00477v1
- Date: Sun, 2 Jul 2023 05:15:43 GMT
- Title: Query-Efficient Decision-based Black-Box Patch Attack
- Authors: Zhaoyu Chen, Bo Li, Shuang Wu, Shouhong Ding, Wenqiang Zhang
- Abstract summary: We propose a differential evolutionary algorithm named DevoPatch for query-efficient decision-based patch attacks.
DevoPatch outperforms the state-of-the-art black-box patch attacks in terms of patch area and attack success rate.
We conduct the vulnerability evaluation of ViT and MLP on image classification in the decision-based patch attack setting for the first time.
- Score: 36.043297146652414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have been shown to be highly vulnerable to
imperceptible adversarial perturbations. As a complementary type of adversary,
patch attacks that introduce perceptible perturbations to images have
attracted the interest of researchers. Existing patch attacks rely on the
architecture of the model or the probabilities of predictions, and perform
poorly in the decision-based setting, where an attack must be constructed
from the minimal information exposed -- the top-1 predicted label. In this
work, we present the first exploration of decision-based patch attacks. To enhance the attack
efficiency, we model the patches using paired key-points and use targeted
images as the initialization of patches, and parameter optimizations are all
performed on the integer domain. Then, we propose a differential evolutionary
algorithm named DevoPatch for query-efficient decision-based patch attacks.
Experiments demonstrate that DevoPatch outperforms the state-of-the-art
black-box patch attacks in terms of patch area and attack success rate within a
given query budget on image classification and face verification. Additionally,
we conduct the vulnerability evaluation of ViT and MLP on image classification
in the decision-based patch attack setting for the first time. Using DevoPatch,
we can evaluate the robustness of models to black-box patch attacks. We believe
this method could inspire the design and deployment of robust vision models
based on various DNN architectures in the future.
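The abstract's main ingredients -- patches parameterized by paired key-points, targeted images as patch initialization, optimization restricted to the integer domain, and a differential-evolution loop that sees only the top-1 label -- can be sketched as a toy. This is an illustrative assumption-laden sketch, not the paper's DevoPatch implementation: `query_top1` stands in for the black-box model (here a trivial mean-intensity classifier), and all names and hyperparameters are invented for the example.

```python
# Hypothetical sketch of a decision-based patch attack via differential
# evolution. Every name here is an illustrative assumption, not the
# paper's actual API. The "model" is a trivial stand-in classifier.
import random

import numpy as np

H = W = 32  # toy grayscale image size


def query_top1(image):
    # Stand-in black-box oracle: exposes only the top-1 label.
    # Each call to this function counts against the query budget.
    return int(image.mean() > 0.5)


def clip_kp(kp):
    # Keep paired key-points (x1, y1, x2, y2) in the integer domain,
    # inside the image, and with x1 < x2, y1 < y2.
    x1, y1, x2, y2 = (int(round(v)) for v in kp)
    x1 = max(0, min(W - 1, x1)); x2 = max(x1 + 1, min(W, x2))
    y1 = max(0, min(H - 1, y1)); y2 = max(y1 + 1, min(H, y2))
    return (x1, y1, x2, y2)


def apply_patch(src, target, kp):
    # Paste pixels from the targeted image into the rectangle spanned
    # by the paired key-points (targeted-image initialization idea).
    x1, y1, x2, y2 = kp
    out = src.copy()
    out[y1:y2, x1:x2] = target[y1:y2, x1:x2]
    return out


def fitness(src, target, kp, target_label):
    # Minimize patch area, but only among patches that still fool the
    # oracle; failed attacks get infinite cost (one query per call).
    if query_top1(apply_patch(src, target, kp)) != target_label:
        return float("inf")
    x1, y1, x2, y2 = kp
    return (x2 - x1) * (y2 - y1)


def devopatch_sketch(src, target, target_label,
                     pop_size=8, iters=40, F=0.5, seed=0):
    rng = random.Random(seed)
    # Initialize near the full frame, so every individual starts out
    # adversarial (the patch is almost the whole targeted image),
    # then evolve toward smaller patch area.
    pop = [clip_kp((rng.randint(0, 4), rng.randint(0, 4),
                    rng.randint(W - 4, W), rng.randint(H - 4, H)))
           for _ in range(pop_size)]
    fit = [fitness(src, target, kp, target_label) for kp in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            # Integer-domain differential mutation: a + F * (b - c),
            # rounded and clipped back to valid key-points.
            trial = clip_kp(tuple(a[k] + F * (b[k] - c[k])
                                  for k in range(4)))
            f = fitness(src, target, trial, target_label)
            if f <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

In this toy, a source image of zeros (label 0) is attacked toward an all-ones target (label 1); the loop keeps only adversarial candidates and shrinks the patch rectangle, mirroring the area-vs-success trade-off the paper evaluates under a query budget.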
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Efficient Decision-based Black-box Patch Attacks on Video Recognition [33.5640770588839]
This work first explores decision-based patch attacks on video models.
To achieve a query-efficient attack, we propose a spatial-temporal differential evolution framework.
STDE has demonstrated state-of-the-art performance in terms of threat, efficiency and imperceptibility.
arXiv Detail & Related papers (2023-03-21T15:08:35Z) - Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in the benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z) - PatchGuard++: Efficient Provable Attack Detection against Adversarial
Patches [28.94435153159868]
An adversarial patch can arbitrarily manipulate image pixels within a restricted region to induce model misclassification.
Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields.
We extend PatchGuard to PatchGuard++ for provably detecting the adversarial patch attack to boost both provable robust accuracy and clean accuracy.
arXiv Detail & Related papers (2021-04-26T14:22:33Z) - Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Decision-based Universal Adversarial Attack [55.76371274622313]
In black-box setting, current universal adversarial attack methods utilize substitute models to generate the perturbation.
We propose an efficient Decision-based Universal Attack (DUAttack)
The effectiveness of DUAttack is validated through comparisons with other state-of-the-art attacks.
arXiv Detail & Related papers (2020-09-15T12:49:03Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack towards mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies failed to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - PatchAttack: A Black-box Texture-based Attack with Reinforcement
Learning [31.255179167694887]
Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification.
Our proposed PatchAttack is query efficient and can break models for both targeted and non-targeted attacks.
arXiv Detail & Related papers (2020-04-12T19:31:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.