Transferable Adversarial Examples for Anchor Free Object Detection
- URL: http://arxiv.org/abs/2106.01618v2
- Date: Fri, 4 Jun 2021 01:59:22 GMT
- Title: Transferable Adversarial Examples for Anchor Free Object Detection
- Authors: Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Bin Zhu, Youbing Yin, Qi Song, Xi Wu
- Abstract summary: We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
- Score: 44.7397139463144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial
attacks: a subtle perturbation can completely change the prediction result. This
vulnerability has led to a surge of research in the area, including adversarial
attacks on object detection networks. However, previous studies have been
dedicated to attacking anchor-based object detectors. In this paper, we present
the first adversarial attack on anchor-free object detectors. It conducts
category-wise attacks, rather than the instance-wise attacks of prior work, and
leverages high-level semantic information to efficiently generate transferable
adversarial examples, which can also be transferred to attack other object
detectors, even anchor-based detectors such as Faster R-CNN. Experimental
results on two benchmark datasets demonstrate that our proposed method achieves
state-of-the-art performance and transferability.
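As a rough, hedged illustration of the category-wise idea, the sketch below suppresses every spatial location belonging to each currently detected category in a single gradient step, instead of attacking detections one instance at a time. The `detector` call, its dense per-category score map, and all thresholds are assumptions for illustration, not the paper's released code.

```python
# Hypothetical sketch of a category-wise attack on an anchor-free detector.
# Assumes `detector(x)` returns dense class scores of shape (num_classes, H, W),
# as in CenterNet/FCOS-style heads; this is NOT the paper's actual interface.
import torch

def category_wise_attack(detector, image, eps=8/255, alpha=1/255, steps=40):
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = detector(x_adv)                      # (num_classes, H, W)
        # Category-wise aggregation: sum the confidence of ALL locations of
        # every detected category, so one step degrades whole categories.
        detected = scores.amax(dim=(1, 2)) > 0.3      # per-category presence
        loss = scores[detected].sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend on confidence
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # L_inf budget
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Because the loss is tied to class semantics rather than to any single detector's box proposals, perturbations built this way plausibly transfer better across architectures, which is the effect the abstract reports.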
Related papers
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995]
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
arXiv Detail & Related papers (2023-01-26T23:03:03Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
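A minimal sketch of the object-fabrication mode described above, under the same assumed dense-score interface as the earlier sketch: the attacker ascends on a chosen target class so that background locations cross the detection threshold and become extra false objects. All names are illustrative, not the paper's code.

```python
# Hedged sketch of an object-fabrication targeted attack: raise the target
# class's score everywhere instead of erasing existing detections.
import torch

def fabrication_attack(detector, image, target_cls, eps=8/255, alpha=1/255, steps=40):
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = detector(x_adv)              # assumed (num_classes, H, W)
        loss = scores[target_cls].sum()       # target-class confidence everywhere
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()   # gradient ASCENT: fabricate
            x_adv = image + (x_adv - image).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]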
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
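For reference, a textbook PGD attack on a classifier is sketched below; the ARC features in the entry above are designed to characterize the traces this family of iterative sign-gradient attacks leaves on a model's gradients.

```python
# Standard projected gradient descent (PGD) attack under an L_inf budget.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # maximize the true-class loss
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]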
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
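A minimal sketch of the adversarial fine-tuning recipe above, assuming a torchvision-style detection model that returns a dict of loss terms in training mode; the attack function is a placeholder, e.g. a PGD-style attack on the summed detection loss.

```python
# Hedged sketch: fine-tune a detector on each batch plus its adversarial copy.
def adversarial_finetune_step(detector, optimizer, images, targets, attack_fn):
    adv_images = attack_fn(detector, images, targets)   # adversarial counterparts
    optimizer.zero_grad()
    clean_losses = detector(images, targets)            # dict of loss terms
    adv_losses = detector(adv_images, targets)
    loss = sum(clean_losses.values()) + sum(adv_losses.values())
    loss.backward()
    optimizer.step()
    return loss.item()
```

- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]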
We propose a new probabilistic adversarial detector motivated by recently introduced non-robust features.
In this paper, we consider non-robust features to be a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage this distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
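One generic way to realize such a likelihood-based detector, sketched under the assumption that clean and adversarial representations each form a Gaussian-like cluster; this is a stand-in, not the paper's exact estimator.

```python
# Fit one Gaussian per cluster of representations, then flag inputs whose
# features are more likely under the adversarial cluster.
import torch
from torch.distributions import MultivariateNormal

def fit_gaussian(feats):                              # feats: (N, D)
    mu = feats.mean(dim=0)
    cov = torch.cov(feats.T) + 1e-4 * torch.eye(feats.shape[1])  # regularized
    return MultivariateNormal(mu, cov)

def is_adversarial(feat, clean_dist, adv_dist):
    # Likelihood-ratio test in representation space.
    return adv_dist.log_prob(feat) > clean_dist.log_prob(feat)
```

- Fast Local Attack: Generating Local Adversarial Examples for Object Detectors [38.813947369401525]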
In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors.
As a result, it is less computationally intensive and achieves higher black-box and transfer attack performance.
The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors, but can also be transferred to attack anchor-based object detectors.
arXiv Detail & Related papers (2020-10-27T13:49:36Z)
- Relevance Attack on Detectors [24.318876747711055]
This paper focuses on highly transferable adversarial attacks on detectors, which are hard to attack in a black-box manner.
We are the first to suggest that the relevance map from interpreters is a property shared across detectors that can be exploited for such attacks.
Based on it, we design a Relevance Attack on Detectors (RAD), which achieves state-of-the-art transferability.
arXiv Detail & Related papers (2020-08-16T02:44:25Z)
- Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models [0.0]
Adversarial examples are inputs crafted so that neural networks misclassify them.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
arXiv Detail & Related papers (2020-05-19T08:58:58Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples that attack anchor-free object detection models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)
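To make the transfer claim concrete, here is a hedged sketch of how transferability is typically measured: craft examples against an anchor-free source detector only, then count how many detections they remove from an unseen (possibly anchor-based) target detector. Both model interfaces and the attack function are assumptions for illustration.

```python
# Sketch of a black-box transfer evaluation between two detectors.
import torch

@torch.no_grad()
def count_detections(detector, image, thresh=0.5):
    boxes, scores, labels = detector(image)       # assumed inference API
    return int((scores > thresh).sum())

def transfer_drop_rate(source_det, target_det, images, attack_fn):
    dropped, total = 0, 0
    for img in images:
        adv = attack_fn(source_det, img)          # crafted on the source ONLY
        before = count_detections(target_det, img)
        after = count_detections(target_det, adv)
        dropped += before - after
        total += before
    return dropped / max(total, 1)                # fraction of detections removed
```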
This list is automatically generated from the titles and abstracts of the papers on this site.