Attacking Important Pixels for Anchor-free Detectors
- URL: http://arxiv.org/abs/2301.11457v1
- Date: Thu, 26 Jan 2023 23:03:03 GMT
- Title: Attacking Important Pixels for Anchor-free Detectors
- Authors: Yunxu Xie, Shu Hu, Xin Wang, Quanyu Liao, Bin Zhu, Xi Wu, Siwei Lyu
- Abstract summary: Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
- Score: 47.524554948433995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial
attacks: subtle perturbation can completely change the prediction result.
Existing adversarial attacks on object detection focus on attacking
anchor-based detectors, which may not work well for anchor-free detectors. In
this paper, we propose the first adversarial attack dedicated to anchor-free
detectors. It is a category-wise attack that attacks important pixels of all
instances of a category simultaneously. Our attack manifests in two forms,
sparse category-wise attack (SCA) and dense category-wise attack (DCA), that
minimize the $L_0$ and $L_\infty$ norm-based perturbations, respectively. For
DCA, we present three variants, DCA-G, DCA-L, and DCA-S, that select a global
region, a local region, and a semantic region, respectively, to attack. Our
experiments on large-scale benchmark datasets including PascalVOC, MS-COCO, and
MS-COCO Keypoints indicate that our proposed methods achieve state-of-the-art
attack performance and transferability on both object detection and human pose
estimation tasks.
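To make the two attack modes concrete, the sketch below shows a PGD-style dense step under an $L_\infty$ budget, where a pixel mask plays the role of the attacked region: an all-ones mask corresponds to a global (DCA-G-like) region, a box to a local (DCA-L-like) region, and a segmentation mask to a semantic (DCA-S-like) region. This is a minimal illustration with a stand-in model and loss, not the authors' implementation; a sparse (SCA-like) variant would instead limit the number of perturbed pixels.

```python
import torch
import torch.nn as nn

def dense_category_attack(model, image, category_loss_fn, region_mask,
                          eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style L_inf attack confined to a chosen pixel region."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Aggregate loss over all instances of the attacked category.
        loss = category_loss_fn(model(adv))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Ascend the loss only where the region mask is 1.
            adv = adv + alpha * grad.sign() * region_mask
            # Project back into the eps-ball and the valid pixel range.
            adv = image + (adv - image).clamp(-eps, eps)
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv

# Toy usage with a stand-in "detector" head (hypothetical, for shapes only).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 20))
x = torch.rand(1, 3, 64, 64)
mask = torch.ones_like(x)              # all-ones mask ~ a global (DCA-G) region
loss_fn = lambda out: out[:, 3].sum()  # pretend these are category-3 scores
x_adv = dense_category_attack(model, x, loss_fn, mask)
print((x_adv - x).abs().max())         # stays within the eps budget
```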
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with $>99\%$ detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
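As a loose illustration of the detection idea behind ACPT above: consecutive queries of an iterative black-box attack differ only by small perturbations, so their embeddings stay unusually close. The encoder, threshold, and history size below are assumptions for the sketch, not the paper's settings.

```python
import torch
import torch.nn.functional as F

class QuerySimilarityDetector:
    """Flags near-duplicate queries, the signature of iterative attacks."""
    def __init__(self, encoder, threshold=0.95, history=5):
        self.encoder = encoder      # e.g. a contrastively tuned image encoder
        self.threshold = threshold  # cosine-similarity cutoff (assumed)
        self.history = history      # how many recent queries to remember
        self.buffer = []

    @torch.no_grad()
    def is_suspicious(self, image):
        z = F.normalize(self.encoder(image).flatten(1), dim=1)
        # Flag if the new query is a near-duplicate of any recent query.
        hit = any(F.cosine_similarity(z, past).item() > self.threshold
                  for past in self.buffer)
        self.buffer = (self.buffer + [z])[-self.history:]
        return hit

# Toy usage: a second, slightly perturbed query gets flagged.
encoder = torch.nn.Sequential(torch.nn.Conv2d(3, 4, 3), torch.nn.Flatten())
det = QuerySimilarityDetector(encoder)
x = torch.rand(1, 3, 32, 32)
print(det.is_suspicious(x), det.is_suspicious(x + 1e-3))  # typically: False True
```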
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with a caution that contours are a common weakness of object detectors across architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
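The contour prior behind ASC above can be caricatured as follows: confine a one-step perturbation to edge pixels, so the attack stays sparse in the $L_0$ sense. The Canny-based mask and single signed step are simplifying assumptions, not the paper's Bayesian formulation.

```python
import cv2
import numpy as np
import torch

def contour_mask(image_uint8):
    """Binary H x W mask of edge pixels, used as the sparse attack support."""
    gray = cv2.cvtColor(image_uint8, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)                 # values in {0, 255}
    return torch.from_numpy(edges.astype(np.float32) / 255.0)

def contour_fgsm(model, x, loss_fn, mask, eps=16 / 255):
    """One signed gradient step applied only on contour pixels."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x)).backward()
    # The H x W mask broadcasts over the batch and channel dimensions.
    return (x + eps * x.grad.sign() * mask).clamp(0, 1).detach()

# Toy usage with a random image and a stand-in classifier head.
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
mask = contour_mask(img)
model = torch.nn.Sequential(torch.nn.Conv2d(3, 4, 3, padding=1),
                            torch.nn.Flatten(),
                            torch.nn.Linear(4 * 64 * 64, 10))
x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255
x_adv = contour_fgsm(model, x, lambda out: out[:, 0].sum(), mask)
```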
- Universal Detection of Backdoor Attacks via Density-based Clustering and Centroids Analysis [24.953032059932525]
We propose a Universal Defence against backdoor attacks based on Clustering and Centroids Analysis (CCA-UD).
The goal of the defence is to reveal whether a Deep Neural Network model is subject to a backdoor attack by inspecting the training dataset.
arXiv Detail & Related papers (2023-01-11T16:31:38Z)
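A rough sketch of the clustering-and-centroids idea behind CCA-UD above: cluster the feature representations of one class and flag clusters whose centroids sit far from the class mean as poisoning candidates. The DBSCAN parameters and the deviation test are illustrative assumptions, not the paper's criterion.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def suspicious_clusters(features, eps=0.5, min_samples=5, dev_factor=2.0):
    """Cluster one class's (N, D) features; flag far-off-centroid clusters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    class_mean = features.mean(axis=0)
    # Typical spread of the class around its mean, used as a yardstick.
    spread = np.linalg.norm(features - class_mean, axis=1).mean()
    flagged = []
    for c in set(labels) - {-1}:                    # -1 marks DBSCAN noise
        centroid = features[labels == c].mean(axis=0)
        if np.linalg.norm(centroid - class_mean) > dev_factor * spread:
            flagged.append(c)                       # poisoning candidate
    return labels, flagged
```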
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
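To ground the DT-UAP entry above, here is a loose sketch of a targeted universal perturbation loop: one perturbation, optimised across many images, that pushes inputs toward a chosen target class under a shared $L_\infty$ budget. The fixed input shape, optimiser, and loss are assumptions; the paper's double-targeted source-to-target formulation is more specific.

```python
import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target_class, shape=(1, 3, 64, 64),
                 eps=10 / 255, lr=0.01, epochs=1):
    """Optimise one perturbation over many images toward a target class."""
    delta = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            logits = model((x + delta).clamp(0, 1))
            target = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(logits, target)  # pull toward the target
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)   # keep a universal L_inf budget
    return delta.detach()
```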
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples that attack anchor-free object detection models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)