Fast Local Attack: Generating Local Adversarial Examples for Object
Detectors
- URL: http://arxiv.org/abs/2010.14291v1
- Date: Tue, 27 Oct 2020 13:49:36 GMT
- Title: Fast Local Attack: Generating Local Adversarial Examples for Object
Detectors
- Authors: Quanyu Liao, Xin Wang, Bin Kong, Siwei Lyu, Youbing Yin, Qi Song and
Xi Wu
- Abstract summary: In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors.
As a result, our method is less computationally intensive and achieves higher black-box and transfer attack performance.
The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors but can also be transferred to attack anchor-based object detectors.
- Score: 38.813947369401525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are vulnerable to adversarial examples: adding
imperceptible adversarial perturbations to images is enough to make them fail.
Most existing research focuses on attacking image classifiers or anchor-based
object detectors, but these methods generate global perturbations over the
whole image, which is unnecessary. In our work, we leverage higher-level
semantic information to generate highly aggressive local perturbations for
anchor-free object detectors. As a result, our method is less computationally
intensive and achieves higher black-box and transfer attack performance. The
adversarial examples generated by our method are not only capable of attacking
anchor-free object detectors but can also be transferred to attack
anchor-based object detectors.
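
The sketch below illustrates the general idea of a local perturbation: a gradient-sign update confined to a binary mask over semantically important regions, rather than the whole image. It is a minimal, hypothetical illustration, not the authors' actual algorithm; the detection loss function and the mask construction are assumptions.

import torch

def local_fgsm_step(image, mask, detection_loss_fn, eps=2/255):
    # image: (C, H, W) tensor in [0, 1].
    # mask: (1, H, W) binary tensor, 1 inside the regions to perturb, 0 elsewhere
    # (e.g., hypothetically derived from the detector's predicted boxes).
    # detection_loss_fn: assumed to map an image to a scalar detection loss.
    image = image.clone().detach().requires_grad_(True)
    loss = detection_loss_fn(image)
    loss.backward()
    # FGSM-style step: gradient sign, zeroed outside the local mask.
    step = eps * image.grad.sign() * mask
    return (image + step).clamp(0.0, 1.0).detach()

Because the update is zero outside the mask, the perturbation stays local by construction, which is what distinguishes this family of attacks from whole-image perturbations.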
Related papers
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Sparse Adversarial Attack to Object Detection [0.8702432681310401]
We propose the Sparse Adversarial Attack (SAA), which enables adversaries to perform effective evasion attacks on detectors with a bounded $\ell_0$ norm (a sketch of such a constraint appears after this list).
Experimental results on YOLOv4 and Faster R-CNN demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-12-26T07:52:28Z)
- DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with asteroid-shaped or grid-shaped diffused patches.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
- Synthesizing Unrestricted False Positive Adversarial Objects Using Generative Models [0.0]
Adversarial examples are data points misclassified by neural networks.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
arXiv Detail & Related papers (2020-05-19T08:58:58Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples that attack anchor-free object detection models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)
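
As referenced in the Sparse Adversarial Attack entry above, one common way to bound a perturbation's $\ell_0$ norm is to keep only the k pixel positions with the largest perturbation magnitude. The sketch below is a generic, hypothetical illustration of that projection, not the SAA paper's implementation; the per-pixel (channel-summed) magnitude criterion is an assumption.

import torch

def project_l0(delta, k):
    # delta: (C, H, W) perturbation tensor; keep the k strongest pixels.
    c, h, w = delta.shape
    mag = delta.abs().sum(dim=0).flatten()  # per-pixel magnitude, shape (H*W,)
    topk = mag.topk(k).indices              # indices of the k largest pixels
    keep = torch.zeros_like(mag)
    keep[topk] = 1.0
    # Broadcast the binary pixel mask over channels, zeroing everything else.
    return delta * keep.view(1, h, w)

After this projection, at most k pixels are perturbed, so the perturbation's per-pixel $\ell_0$ norm is bounded by k regardless of how the dense perturbation was computed.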
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.