Attacking Object Detector Using A Universal Targeted Label-Switch Patch
- URL: http://arxiv.org/abs/2211.08859v1
- Date: Wed, 16 Nov 2022 12:08:58 GMT
- Title: Attacking Object Detector Using A Universal Targeted Label-Switch Patch
- Authors: Avishag Shapira, Ron Bitton, Dan Avraham, Alon Zolfi, Yuval Elovici, Asaf Shabtai
- Abstract summary: Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years.
No prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.
We propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO.
- Score: 44.44676276867374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks against deep learning-based object detectors (ODs) have
been studied extensively in the past few years. These attacks cause the model
to make incorrect predictions by placing a patch containing an adversarial
pattern on the target object or anywhere within the frame. However, no prior
research has proposed a misclassification attack on ODs in which the patch is
applied to the target object itself. In this study, we propose a novel, universal,
targeted, label-switch attack against the state-of-the-art object detector,
YOLO. In our attack, we use (i) a tailored projection function to enable the
placement of the adversarial patch on multiple target objects in the image
(e.g., cars), each of which may be located at a different distance from the
camera or viewed at a different angle, and (ii) a unique
loss function capable of changing the label of the attacked objects. The
proposed universal patch, which is trained in the digital domain, is
transferable to the physical domain. We performed an extensive evaluation using
different types of object detectors, different video streams captured by
different cameras, and various target classes, and evaluated different
configurations of the adversarial patch in the physical domain.
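To make the two ingredients concrete, here is a minimal PyTorch sketch of (i) a naive patch placement and (ii) a label-switch loss. It is illustrative only, not the paper's implementation: the paper's tailored projection also models object distance and view angle, the class indices are assumptions, and `detector_class_logits` is a hypothetical stand-in for YOLO's per-box class scores.

```python
# Illustrative sketch only; not the paper's implementation.
import torch
import torch.nn.functional as F

SOURCE_CLASS = 2    # e.g., "car" in COCO (assumed index)
TARGET_CLASS = 7    # attacker-chosen label, e.g., "truck" (assumed index)

def place_patch(image, patch, box):
    """Scale the patch into the object's bounding box (a naive stand-in for
    the paper's tailored projection). Assumes a valid box with x2 > x1, y2 > y1."""
    x1, y1, x2, y2 = (int(v) for v in box)
    resized = F.interpolate(patch.unsqueeze(0), size=(y2 - y1, x2 - x1),
                            mode="bilinear", align_corners=False)[0]
    patched = image.clone()
    patched[:, y1:y2, x1:x2] = resized
    return patched

def label_switch_loss(class_logits):
    """Encourage TARGET_CLASS and suppress SOURCE_CLASS, leaving objectness
    alone so the object is still detected, just relabeled."""
    log_probs = F.log_softmax(class_logits, dim=-1)
    return (-log_probs[..., TARGET_CLASS] + log_probs[..., SOURCE_CLASS]).mean()

# Training skeleton (detector, data loader, and boxes are assumed):
# patch = torch.rand(3, 64, 64, requires_grad=True)
# opt = torch.optim.Adam([patch], lr=1e-2)
# for image, box in loader:
#     adv = place_patch(image, patch.clamp(0, 1), box)
#     loss = label_switch_loss(detector_class_logits(adv))  # hypothetical helper
#     opt.zero_grad(); loss.backward(); opt.step()
```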
Related papers
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include both targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- BadDet: Backdoor Attacks on Object Detection [42.40418007499009]
We propose four kinds of backdoor attacks for the object detection task.
A trigger can falsely generate an object of the target class.
A single trigger can change the predictions of all objects in an image to the target class.
arXiv Detail & Related papers (2022-05-28T18:02:11Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification; a toy version of such a semantically weighted loss is sketched after this entry.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
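The "semantic distance" idea in the last entry admits a simple reading: weight the attack objective so that probability mass moves toward classes that lie far from the true class in some embedding space. Below is a toy sketch under that assumption, with hand-made 2-D class embeddings standing in for real word vectors; none of it is taken from the cited paper.

```python
# Toy sketch of a semantically weighted misclassification loss; the class
# embeddings are hand-made stand-ins for real word vectors.
import torch
import torch.nn.functional as F

CLASS_EMB = torch.tensor([
    [1.0, 0.0],   # class 0: car   (the true class)
    [0.9, 0.1],   # class 1: truck (semantically close to car)
    [0.0, 1.0],   # class 2: bird  (semantically far from car)
])
TRUE_CLASS = 0

def semantic_weighted_loss(class_logits):
    """Reward probability mass on classes whose embeddings lie far from
    the true class, so the induced misclassification is a 'strong' one."""
    probs = F.softmax(class_logits, dim=-1)
    emb = F.normalize(CLASS_EMB, dim=-1)
    dist = 1.0 - emb @ emb[TRUE_CLASS]          # cosine distance per class
    return -(probs * dist).sum(dim=-1).mean()   # maximize expected distance
```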