Sparse Adversarial Attack to Object Detection
- URL: http://arxiv.org/abs/2012.13692v1
- Date: Sat, 26 Dec 2020 07:52:28 GMT
- Title: Sparse Adversarial Attack to Object Detection
- Authors: Jiayu Bao
- Abstract summary: We propose Sparse Adversarial Attack (SAA) which enables adversaries to perform effective evasion attack on detectors with bounded \emph{l$_0$} norm.
Experiment results on YOLOv4 and FasterRCNN reveal the effectiveness of our method.
- Score: 0.8702432681310401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples have attracted considerable attention in recent
years. Many adversarial attacks have been proposed against image classifiers,
but few works shift attention to object detectors. In this paper, we propose
the Sparse Adversarial Attack (SAA), which enables adversaries to perform
effective evasion attacks on detectors with a bounded \emph{l$_{0}$}-norm
perturbation. We select the fragile positions of the image and design an
evasion loss function for the task. Experiment results on YOLOv4 and
FasterRCNN reveal the effectiveness of our method. In addition, our SAA shows
great transferability across different detectors in the black-box attack
setting. Code is available at \emph{https://github.com/THUrssq/Tianchi04}.
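The idea of an l$_0$-bounded evasion attack can be illustrated with a minimal sketch. This is not the authors' implementation: a toy sigmoid-of-linear "objectness" score stands in for a real detector, the fragile positions are chosen as the pixels with the largest gradient magnitude, and the loss is simply the objectness score itself. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def objectness(x, w):
    # Toy surrogate for a detector's objectness score: sigmoid of a linear map.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def sparse_evasion_attack(x, w, k=8, step=0.5, iters=20):
    """l0-bounded evasion sketch: perturb only the k pixels with the largest
    gradient magnitude (the 'fragile' positions), descending the objectness
    score so the toy detector no longer fires."""
    x_adv = x.copy()
    s = objectness(x_adv, w)
    grad = s * (1.0 - s) * w                     # d objectness / d x (toy model)
    mask = np.zeros_like(x, dtype=bool)
    mask[np.argsort(-np.abs(grad))[:k]] = True   # select fragile positions
    for _ in range(iters):
        s = objectness(x_adv, w)
        grad = s * (1.0 - s) * w
        x_adv[mask] -= step * np.sign(grad[mask])  # signed gradient descent
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv, mask

rng = np.random.default_rng(0)
x = rng.random(64)            # flattened toy 'image' in [0, 1)
w = rng.standard_normal(64)   # toy detector weights
x_adv, mask = sparse_evasion_attack(x, w)
print(mask.sum())                                # l0 budget: at most k pixels
print(objectness(x, w), objectness(x_adv, w))    # score before vs. after
```

Only the masked pixels are ever modified, so the l$_0$ norm of the perturbation is bounded by `k` by construction; a real attack on YOLOv4 or FasterRCNN would replace the toy score with the detector's confidence outputs and backpropagate through the network.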
Related papers
- X-Detect: Explainable Adversarial Patch Detection for Object Detectors
in Retail [38.10544338096162]
Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks.
We present X-Detect, a novel adversarial patch detector that can detect adversarial samples in real time.
X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques.
arXiv Detail & Related papers (2023-06-14T10:35:21Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Parallel Rectangle Flip Attack: A Query-based Black-box Attack against
Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
arXiv Detail & Related papers (2022-01-22T06:00:17Z) - Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for
Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - RPATTACK: Refined Patch Attack on General Object Detectors [31.28929190510979]
We propose a novel patch-based method for attacking general object detectors.
Our RPAttack can achieve an amazing missed detection rate of 100% for both Yolo v4 and Faster R-CNN.
arXiv Detail & Related papers (2021-03-23T11:45:41Z) - Fast Local Attack: Generating Local Adversarial Examples for Object
Detectors [38.813947369401525]
In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors.
As a result, our method is less computationally intensive and achieves higher black-box and transfer attack performance.
The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors, but can also be transferred to attack anchor-based object detectors.
arXiv Detail & Related papers (2020-10-27T13:49:36Z) - DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (\textbf{DPAttack}) to fool object detectors with diffused patches of asteroid or grid shapes.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.