Category-wise Attack: Transferable Adversarial Examples for Anchor Free
Object Detection
- URL: http://arxiv.org/abs/2003.04367v4
- Date: Tue, 23 Jun 2020 00:14:15 GMT
- Title: Category-wise Attack: Transferable Adversarial Examples for Anchor Free
Object Detection
- Authors: Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song, Xi
Wu
- Abstract summary: We present an effective and efficient algorithm to generate adversarial examples to attack anchor-free object models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
- Score: 38.813947369401525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial
attacks: subtle perturbations can completely change the classification results.
Their vulnerability has led to a surge of research in this direction. However,
most existing works are dedicated to attacking anchor-based object detection models. In this
work, we aim to present an effective and efficient algorithm to generate
adversarial examples to attack anchor-free object models based on two
approaches. First, we conduct category-wise instead of instance-wise attacks on
the object detectors. Second, we leverage the high-level semantic information
to generate the adversarial examples. Surprisingly, the generated adversarial
examples are not only able to effectively attack the targeted anchor-free object
detector but can also be transferred to attack other object detectors, even
anchor-based detectors such as Faster R-CNN.
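The abstract does not include a reference implementation; the sketch below is a minimal, hypothetical illustration of the category-wise idea: a PGD-style loop that aggregates the loss over all activations of each predicted category on an anchor-free (CenterNet/FCOS-style) class heatmap, rather than attacking detections one instance at a time. The detector interface, heatmap shapes, and hyperparameters (eps, alpha, steps, score_thresh) are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch of a category-wise, PGD-style attack on an anchor-free
# detector. `detector` is assumed to return per-category heatmaps of shape
# (B, num_classes, H, W); all names and hyperparameters are illustrative.
import torch

def category_wise_attack(detector, image, eps=8 / 255, alpha=1 / 255,
                         steps=40, score_thresh=0.3):
    x_orig = image.detach()
    x_adv = x_orig.clone()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        heatmaps = detector(x_adv)          # (B, C, H, W) raw class scores
        probs = torch.sigmoid(heatmaps)

        # Category-wise loss: for every category that still has confident
        # activations, sum the scores of *all* of them at once, instead of
        # attacking individual instances separately.
        loss = 0.0
        for c in range(probs.shape[1]):
            mask = probs[:, c] > score_thresh
            if mask.any():
                loss = loss + probs[:, c][mask].sum()

        if not torch.is_tensor(loss):       # nothing left to suppress
            break

        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Descend on the summed category scores to suppress detections,
            # then project back into the L-infinity ball around the input.
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)

    return x_adv.detach()
```

Under this reading, transferability would be checked by feeding the returned x_adv to a different detector (for example an anchor-based Faster R-CNN) and measuring how many detections survive.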
Related papers
- Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection [24.271795745084123]
Deep neural networks (DNNs) have shown unprecedented success in object detection tasks.
Backdoor attacks on object detection tasks have not been properly investigated and explored.
We propose a simple yet effective backdoor attack method against object detection without modifying the ground truth annotations.
arXiv Detail & Related papers (2023-07-19T22:46:35Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted attacks and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where the target label is treated at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - Fast Local Attack: Generating Local Adversarial Examples for Object
Detectors [38.813947369401525]
In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors.
As a result, it is less computationally intensive and achieves higher black-box attack and transfer attack performance.
The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors, but are also able to be transferred to attack anchor-based object detectors.
arXiv Detail & Related papers (2020-10-27T13:49:36Z) - Synthesizing Unrestricted False Positive Adversarial Objects Using
Generative Models [0.0]
Adversarial examples are data points misclassified by neural networks.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
arXiv Detail & Related papers (2020-05-19T08:58:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.