A Large-scale Multiple-objective Method for Black-box Attack against
Object Detection
- URL: http://arxiv.org/abs/2209.07790v1
- Date: Fri, 16 Sep 2022 08:36:42 GMT
- Title: A Large-scale Multiple-objective Method for Black-box Attack against
Object Detection
- Authors: Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan
Wu, and Xiaochun Cao
- Abstract summary: We propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes.
We extend the standard Genetic Algorithm with Random Subset selection and Divide-and-Conquer, called GARSDC, which significantly improves the efficiency.
Compared with state-of-the-art attack methods, GARSDC decreases mAP by an average of 12.0 and reduces queries by about 1000 in extensive experiments.
- Score: 70.00150794625053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have shown that detectors based on deep models are vulnerable
to adversarial examples, even in the black-box scenario where the attacker
cannot access the model information. Most existing attack methods aim to
minimize the true positive rate, which often yields poor attack performance,
because another sub-optimal bounding box may be detected around the attacked
bounding box and become the new true positive. To address this challenge, we
propose to minimize the true positive rate and maximize the false positive
rate, which encourages more false positive objects to block the generation of
new true positive bounding boxes. We model this as a multi-objective
optimization (MOP) problem, whose Pareto-optimal solutions can be searched by a
genetic algorithm. However, our task has more than two million decision
variables, leading to low search efficiency. We therefore extend the standard
Genetic Algorithm with Random Subset selection and Divide-and-Conquer, called
GARSDC, which significantly improves the search efficiency. Moreover, to
alleviate the sensitivity of genetic algorithms to population quality, we
generate a gradient-prior initial population, exploiting the transferability
between detectors with similar backbones. Compared with state-of-the-art attack
methods, GARSDC decreases mAP by an average of 12.0 and reduces queries by
about 1000 in extensive experiments. Our code is available at
https://github.com/LiangSiyuan21/GARSDC.
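The sketch below is an illustrative reconstruction of the attack loop described in the abstract, not the authors' released implementation: `query_fn`, the scalarized fitness, and all hyper-parameters (population size, number of slabs, mutation fraction) are assumptions chosen for brevity. It shows the three ingredients the abstract names: a fitness that rewards a low true-positive rate and a high false-positive rate, mutation restricted to a random subset of the decision variables, and a divide-and-conquer split of the perturbation into blocks that are evolved in turn.

```python
import numpy as np

def fitness(query_fn, image, delta, weight=1.0):
    """Scalarized multi-objective fitness: reward a low true-positive rate and a
    high false-positive rate from the black-box detector (the paper searches the
    Pareto front of the two objectives instead of scalarizing)."""
    tp_rate, fp_rate = query_fn(np.clip(image + delta, 0.0, 1.0))
    return -tp_rate + weight * fp_rate

def mutate_random_subset(delta, eps, subset_frac=0.05, rng=None):
    """Random Subset selection: each mutation touches only a small random subset
    of the (potentially millions of) decision variables."""
    if rng is None:
        rng = np.random.default_rng()
    flat = delta.ravel().copy()
    idx = rng.choice(flat.size, size=max(1, int(subset_frac * flat.size)), replace=False)
    flat[idx] = np.clip(flat[idx] + rng.uniform(-eps, eps, idx.size), -eps, eps)
    return flat.reshape(delta.shape)

def garsdc_like_attack(query_fn, image, eps=8 / 255, pop_size=8, blocks=4,
                       generations=50, init_pop=None, seed=0):
    """Genetic search with Divide-and-Conquer: the perturbation is split into
    `blocks` horizontal slabs, and crossover/mutation act on one slab at a time."""
    rng = np.random.default_rng(seed)
    # A gradient-prior initial population (transferred from a surrogate detector
    # with a similar backbone) can be passed in; otherwise start from noise.
    pop = init_pop if init_pop is not None else [
        rng.uniform(-eps, eps, image.shape) for _ in range(pop_size)]
    bounds = np.linspace(0, image.shape[0], blocks + 1, dtype=int)
    for _ in range(generations):
        for lo, hi in zip(bounds[:-1], bounds[1:]):        # divide-and-conquer over slabs
            scores = [fitness(query_fn, image, d) for d in pop]
            keep = np.argsort(scores)[-(pop_size // 2):]   # keep the fittest half as parents
            parents = [pop[i] for i in keep]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = rng.choice(len(parents), size=2, replace=False)
                child = parents[a].copy()
                child[lo:hi] = parents[b][lo:hi]           # crossover inside the current slab
                child[lo:hi] = mutate_random_subset(child[lo:hi], eps, rng=rng)
                children.append(child)
            pop = parents + children
    best = max(pop, key=lambda d: fitness(query_fn, image, d))
    return np.clip(image + best, 0.0, 1.0)
```

A real run would wrap an actual detector query (returning true-positive and false-positive rates against the clean image's detections) in `query_fn`; any stub with the same signature can be used to smoke-test the loop.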
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by
gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
arXiv Detail & Related papers (2023-06-16T13:41:24Z)
- Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature [36.78292952798531]
We propose a Singular Value Decomposition (SVD)-based feature-level attack method.
Our approach is inspired by the discovery that eigenvectors associated with the larger singular values from the middle layer features exhibit superior generalization and attention properties.
arXiv Detail & Related papers (2023-05-02T12:27:44Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework (a generic formulation of this constrained problem is sketched after this list).
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700$\times$ faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- TEAM: We Need More Powerful Adversarial Examples for DNNs [6.7943676146532885]
Adversarial examples can lead to misclassification of deep neural networks (DNNs).
We propose a novel method to generate more powerful adversarial examples than previous methods.
Our method can reliably produce adversarial examples with a 100% attack success rate (ASR) while requiring smaller perturbations.
arXiv Detail & Related papers (2020-07-31T04:11:02Z)
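For the sparse-attack entries above (in particular the homotopy-algorithm item), a generic way to write the constrained problem they describe is given below; the notation is standard and not necessarily the papers' exact objective: x is the clean input with label y, f the model, L the classification loss, k the pixel budget, and epsilon the magnitude bound.

```latex
\max_{\delta}\; \mathcal{L}\bigl(f(x+\delta),\, y\bigr)
\quad \text{subject to} \quad
\|\delta\|_{0} \le k, \qquad \|\delta\|_{\infty} \le \epsilon
```

The l_0 constraint enforces sparsity (only a few pixels change), while the l_infty constraint keeps each individual change small.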