We Can Always Catch You: Detecting Adversarial Patched Objects WITH or
WITHOUT Signature
- URL: http://arxiv.org/abs/2106.05261v2
- Date: Thu, 10 Jun 2021 07:38:23 GMT
- Title: We Can Always Catch You: Detecting Adversarial Patched Objects WITH or
WITHOUT Signature
- Authors: Bin Liang and Jiachun Li and Jianjun Huang
- Abstract summary: In this paper, we explore the problem of detecting adversarial patch attacks on object detection.
A fast signature-based defense method is proposed and demonstrated to be effective.
The newly generated adversarial patches can successfully evade the proposed signature-based defense.
We present a novel signature-independent detection method based on internal content semantic consistency.
- Score: 3.5272597442284104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, object detection based on deep learning has proven to be
vulnerable to adversarial patch attacks. Attackers holding a specially crafted
patch can hide themselves from state-of-the-art person detectors, e.g., YOLO,
even in the physical world. This kind of attack can pose serious security
threats, such as escaping from surveillance cameras. In this paper, we explore
in depth the problem of detecting adversarial patch attacks on object
detection. First, we identify an exploitable signature of existing adversarial
patches from the perspective of visualization-based explanation. A fast
signature-based defense method is proposed and demonstrated to be effective.
Second, we design an improved patch generation algorithm to reveal the risk
that signature-based defenses may be bypassed by techniques emerging in the
future. The newly generated adversarial patches can successfully evade the
proposed signature-based defense. Finally, we present a novel
signature-independent detection method based on internal content semantic
consistency rather than any attack-specific prior knowledge. The fundamental
intuition is that an adversarial object can appear locally but disappear
globally in an input image. Our experiments demonstrate that the
signature-independent method can effectively detect both the existing and the
improved attacks, and that it generalizes, detecting unforeseen and even other
types of attacks without any attack-specific prior knowledge. The two proposed
detection methods suit different scenarios, and we believe that combining them
offers comprehensive protection.
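The "appears locally, disappears globally" intuition lends itself to a simple
cross-check: run the detector on the full image and on overlapping crops, and
flag the image if an object surfaces in a crop but has no counterpart among
the global detections. Below is a minimal sketch of such a consistency check,
not the authors' implementation; the `detect` callable, window size, stride,
and IoU threshold are illustrative assumptions.

```python
# Minimal sketch of a local-vs-global consistency check. It assumes a
# hypothetical detect(image) -> list of ((x1, y1, x2, y2), label, score)
# backed by any off-the-shelf detector (e.g., a YOLO wrapper). This is an
# illustration of the stated intuition, not the paper's actual method.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def appears_locally_disappears_globally(image, detect,
                                        win=416, stride=208, iou_thr=0.5):
    """Flag `image` if some crop yields a detection that has no matching
    detection in the full image ("appears locally, disappears globally")."""
    h, w = image.shape[:2]
    global_boxes = [box for box, _, _ in detect(image)]
    for y in range(0, max(h - win, 0) + 1, stride):
        for x in range(0, max(w - win, 0) + 1, stride):
            crop = image[y:y + win, x:x + win]
            for (x1, y1, x2, y2), label, score in detect(crop):
                # Map the crop-local box back into full-image coordinates.
                box = (x1 + x, y1 + y, x2 + x, y2 + y)
                if all(iou(box, g) < iou_thr for g in global_boxes):
                    return True  # locally visible, globally suppressed
    return False
```

In practice one would also threshold on `score` and tolerate boundary effects
near crop edges; the paper reasons about internal content semantics rather
than this literal crop-and-compare, so treat the sketch purely as a way to
see why a hidden object that reappears in a local view signals an attack.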
Related papers
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid arousing the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen the attack.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches [12.947503245230866]
We propose a novel and general methodology for defending against adversarial patch attacks.
We inject two types of defensive patches, canary and woodpecker, into the input to proactively probe or weaken potential adversarial patches.
The effectiveness and practicality of the proposed method are demonstrated through comprehensive experiments.
arXiv Detail & Related papers (2023-11-10T15:36:57Z)
- X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail [38.10544338096162]
Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks.
We present X-Detect, a novel adversarial patch detector that can detect adversarial samples in real time.
X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques.
arXiv Detail & Related papers (2023-06-14T10:35:21Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack [16.221725939480084]
We present a detector-agnostic trace removal attack for DeepFake anti-forensics.
Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline.
Experiments show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors.
arXiv Detail & Related papers (2022-03-22T03:13:33Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors have been found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors [12.946967210071032]
Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch, which attacks an object detector with texture-consistent adversarial patches.
arXiv Detail & Related papers (2021-09-30T14:47:29Z)
- DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with asteroid-shaped or grid-shaped diffused patches.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.