Dynamic Adversarial Patch for Evading Object Detection Models
- URL: http://arxiv.org/abs/2010.13070v1
- Date: Sun, 25 Oct 2020 08:55:40 GMT
- Title: Dynamic Adversarial Patch for Evading Object Detection Models
- Authors: Shahar Hoory and Tzvika Shapira and Asaf Shabtai and Yuval Elovici
- Abstract summary: We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification.
- Score: 47.32228513808444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research shows that neural network models used for computer vision
(e.g., YOLO and Fast R-CNN) are vulnerable to adversarial evasion attacks. Most
of the existing real-world adversarial attacks against object detectors use an
adversarial patch which is attached to the target object (e.g., a carefully
crafted sticker placed on a stop sign). This method may not be robust to
changes in the camera's location relative to the target object; in addition, it
may not work well when applied to nonplanar objects such as cars. In this
study, we present an innovative attack method against object detectors applied
in a real-world setup that addresses some of the limitations of existing
attacks. Our method uses dynamic adversarial patches which are placed at
multiple predetermined locations on a target object. An adversarial learning
algorithm is applied to generate the patches. The dynamic attack
is implemented by switching between optimized patches dynamically, according to
the camera's position (i.e., the object detection system's position). In order
to demonstrate our attack in a real-world setup, we implemented the patches by
attaching flat screens to the target object; the screens are used to present
the patches and switch between them, depending on the current camera location.
Thus, the attack is dynamic and adjusts itself to the situation to achieve
optimal results. We evaluated our dynamic patch approach by attacking the
YOLOv2 object detector with a car as the target object and succeeded in
misleading it in up to 90% of the video frames when filming the car from a wide
viewing angle range. We improved the attack by generating patches that consider
the semantic distance between the target object and its classification. We also
examined the attack's transferability among different car models and were able
to mislead the detector 71% of the time.
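To make the dynamic mechanism concrete, here is a minimal sketch of the patch-switching logic the abstract describes: one patch is pre-optimized per range of viewing angles, and the screens on the target object display whichever patch matches the camera's current position. Everything here (the 30-degree sectors, estimate_viewing_angle, select_patch) is an illustrative assumption, not the paper's published code.

```python
import math

# Hypothetical setup: one pre-optimized adversarial patch per viewing-angle
# sector. The paper does not specify sector boundaries; these are assumed.
ANGLE_SECTORS = [(-90, -30), (-30, 30), (30, 90)]  # degrees around the object

def estimate_viewing_angle(camera_xy, object_xy):
    """Horizontal angle of the camera relative to the object, in degrees."""
    dx = camera_xy[0] - object_xy[0]
    dy = camera_xy[1] - object_xy[1]
    return math.degrees(math.atan2(dx, dy))

def select_patch(angle_deg, patches):
    """Return the patch optimized for the sector containing the current angle."""
    for (lo, hi), patch in zip(ANGLE_SECTORS, patches):
        if lo <= angle_deg < hi:
            return patch
    # Outside all sectors: fall back to the nearest boundary patch.
    return patches[0] if angle_deg < ANGLE_SECTORS[0][0] else patches[-1]
```

In the physical setup the abstract describes, select_patch would drive the flat screens attached to the car, swapping the displayed image as the detector-carrying camera moves.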
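The semantic-distance improvement can be read as shaping the adversarial loss so that the detector's predicted class is pushed toward labels far from the true one (e.g., away from "car" toward unrelated classes). The PyTorch sketch below is one plausible form of such an objective under that assumption; semantic_distance is a hypothetical precomputed vector (e.g., word-embedding distances between class labels), and the paper's exact formulation may differ.

```python
import torch

def semantic_adversarial_loss(class_probs, true_class, semantic_distance):
    """One assumed form of a semantic-distance-aware patch objective.

    class_probs:       (num_classes,) detector class probabilities for the object
    true_class:        index of the ground-truth class (e.g., "car")
    semantic_distance: (num_classes,) precomputed distance of every class
                       from true_class in some semantic embedding space
    """
    # Minimizing this suppresses the true class while rewarding probability
    # mass on classes that are semantically far from it.
    true_prob = class_probs[true_class]
    distant_mass = torch.sum(class_probs * semantic_distance)
    return true_prob - distant_mass
```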
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Attacking Object Detector Using A Universal Targeted Label-Switch Patch [44.44676276867374]
Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years.
No prior research has proposed a misclassification attack on ODs in which the patch is applied to the target object.
We propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO.
arXiv Detail & Related papers (2022-11-16T12:08:58Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors [12.946967210071032]
Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch, which attacks an object detector with texture-consistent adversarial patches.
arXiv Detail & Related papers (2021-09-30T14:47:29Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)