You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for
Object Detectors
- URL: http://arxiv.org/abs/2109.15177v1
- Date: Thu, 30 Sep 2021 14:47:29 GMT
- Title: You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for
Object Detectors
- Authors: Zijian Zhu, Hang Su, Chang Liu, Wenzhao Xiang and Shibao Zheng
- Abstract summary: Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch (LDAP), which attacks an object detector with small, texture-consistent adversarial patches.
- Score: 12.946967210071032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind spots and outright deceit can bedevil machine learning
models. Unidentified objects such as digital "stickers," also known as
adversarial patches, can fool facial recognition systems, surveillance systems
and self-driving cars. Fortunately, most existing adversarial patches can be
outwitted, disabled and rejected by a simple classification network called an
adversarial patch detector, which distinguishes adversarial patches from
original images. An object detector classifies and predicts the types of
objects within an image, such as by distinguishing a motorcyclist from the
motorcycle, while also localizing each object's placement within the image by
"drawing" so-called bounding boxes around each object, once again separating
the motorcyclist from the motorcycle. To train detectors even better, however,
we need to keep subjecting them to confusing or deceitful adversarial patches
as we probe for the models' blind spots. For such probes, we came up with a
novel approach, a Low-Detectable Adversarial Patch (LDAP), which attacks an
object detector with small, texture-consistent adversarial patches, making
these adversaries less likely to be recognized. Concretely, we use several
geometric primitives to model the shapes and positions of the patches. To
enhance attack performance, we also assign different weights to the bounding
boxes in the loss function. Our experiments on the common detection dataset COCO as
well as the driving-video dataset D2-City show that LDAP is an effective attack
method, and can resist the adversarial patch detector.
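To make the abstract's two core ideas concrete, here is a minimal, hypothetical PyTorch sketch: one patch is modeled by a single differentiable circular primitive (its center and radius are optimized alongside its texture), and each predicted bounding box's objectness is weighted in the attack loss. The `detector` interface, the area-based weighting, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: `detector` is a stand-in that returns per-box
# objectness scores and box areas for a patched image; weighting boxes by
# area is an assumed choice, not necessarily the paper's scheme.
import torch

def circle_mask(cx, cy, r, height, width, sharpness=10.0):
    """Soft mask for one circular geometric primitive, differentiable in
    its center (cx, cy) and radius r so they can be optimized directly."""
    ys = torch.arange(height, dtype=torch.float32).view(height, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, width)
    dist = ((xs - cx) ** 2 + (ys - cy) ** 2).sqrt()
    return torch.sigmoid(sharpness * (r - dist))  # ~1 inside, ~0 outside

def attack_step(image, texture, params, detector, optimizer):
    """One optimization step: blend the patch into the image through the
    primitive's mask, then minimize a box-weighted detection loss."""
    cx, cy, r = params
    height, width = image.shape[-2:]
    mask = circle_mask(cx, cy, r, height, width)
    patched = image * (1.0 - mask) + texture * mask
    scores, areas = detector(patched)   # per-box objectness and areas
    weights = areas / areas.sum()       # assumption: weight boxes by area
    loss = (weights * scores).sum()     # push weighted objectness down
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, `texture`, `cx`, `cy`, and `r` would be leaf tensors with `requires_grad=True` registered with the optimizer, so each step jointly adjusts the patch's appearance, position, and size; keeping the primitives small and the texture consistent with the scene is what the abstract credits for low detectability.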
Related papers
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid arousing the suspicion of human drivers, we propose a content-based camouflage method, along with an attack-enhancement method to strengthen the attack.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- The Weaknesses of Adversarial Camouflage in Overhead Imagery [7.724233098666892]
We build a library of 24 adversarial patches to disguise four different object classes: bus, car, truck, van.
We show that while adversarial patches may fool object detectors, the presence of such patches is often easily uncovered.
This raises the question of whether such patches truly constitute camouflage.
arXiv Detail & Related papers (2022-07-06T20:39:21Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches with one single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improve the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
- DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with asteroid-shaped or grid-shaped diffused patches.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
- Adversarial Patch Camouflage against Aerial Detection [2.3268622345249796]
Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage.
In this work, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance.
Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities.
arXiv Detail & Related papers (2020-08-31T15:21:50Z)