Defending Object Detectors against Patch Attacks with Out-of-Distribution Smoothing
- URL: http://arxiv.org/abs/2205.08989v2
- Date: Fri, 06 Dec 2024 03:25:10 GMT
- Title: Defending Object Detectors against Patch Attacks with Out-of-Distribution Smoothing
- Authors: Ryan Feng, Neal Mangaokar, Jihye Choi, Somesh Jha, Atul Prakash
- Abstract summary: We introduce OODSmoother, which characterizes the properties of approaches that aim to remove adversarial patches.
This framework naturally guides us to design 1) a novel adaptive attack that breaks existing patch attack defenses on object detectors, and 2) a novel defense approach SemPrior that takes advantage of semantic priors.
We find that SemPrior alone provides up to a 40% gain, or up to a 60% gain when combined with existing defenses.
- Score: 21.174037250133622
- License:
- Abstract: Patch attacks against object detectors have been of recent interest due to their being physically realizable and more closely aligned with practical systems. In response to this threat, many new defenses have been proposed that train a patch segmenter model to detect and remove the patch before the image is passed to the downstream model. We unify these approaches with a flexible framework, OODSmoother, which characterizes the properties of approaches that aim to remove adversarial patches. This framework naturally guides us to design 1) a novel adaptive attack that breaks existing patch attack defenses on object detectors, and 2) a novel defense approach SemPrior that takes advantage of semantic priors. Our key insight behind SemPrior is that the existing machine learning-based patch detectors struggle to learn semantic priors and that explicitly incorporating them can improve performance. We find that SemPrior alone provides up to a 40% gain, or up to a 60% gain when combined with existing defenses.
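For illustration, the following is a minimal PyTorch sketch of the segment-then-remove pipeline the abstract describes: a patch segmenter flags suspected adversarial pixels, the flagged region is neutralized, and the cleaned image is passed to the unchanged downstream detector. The `segmenter` and `detector` modules, the threshold, and the constant-fill masking step are hypothetical placeholders, not the paper's implementation.

```python
import torch

@torch.no_grad()
def defend_and_detect(image, segmenter, detector, threshold=0.5):
    """Run a segment-then-remove patch defense before detection.

    image: float tensor of shape (3, H, W) with values in [0, 1].
    segmenter: hypothetical model mapping (1, 3, H, W) -> (1, 1, H, W) logits.
    detector: the unchanged downstream object detector.
    """
    # Predict a per-pixel probability that each pixel belongs to a patch.
    logits = segmenter(image.unsqueeze(0)).squeeze(0)  # (1, H, W)
    patch_mask = (logits.sigmoid() > threshold).float()

    # Neutralize the suspected patch region. Defenses in this family differ
    # here: constant fill (as below), inpainting, or mask completion.
    cleaned = image * (1.0 - patch_mask)

    # The downstream detector sees only the cleaned image.
    return detector(cleaned.unsqueeze(0))
```

Approaches in this family differ mainly in how the patch mask is predicted and how the masked region is filled in (e.g., constant fill versus inpainting).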
Related papers
- PAD: Patch-Agnostic Defense against Adversarial Patch Attacks [36.865204327754626]
Adversarial patch attacks present a significant threat to real-world object detectors.
We identify two inherent characteristics of adversarial patches: semantic independence and spatial heterogeneity.
We propose PAD, a novel adversarial patch localization and removal method that does not require prior knowledge or additional training.
arXiv Detail & Related papers (2024-04-25T09:32:34Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame [28.128458352103543]
High-performance object detection networks are vulnerable to adversarial patch attacks.
Person-hiding attacks are emerging as a serious problem in many safety-critical applications.
We propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns.
arXiv Detail & Related papers (2022-04-27T15:18:08Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature [3.5272597442284104]
In this paper, we explore the problem of detecting adversarial patch attacks on object detectors.
A fast signature-based defense method is proposed and demonstrated to be effective.
However, newly generated adversarial patches can successfully evade this signature-based defense.
We therefore present a novel signature-independent detection method based on internal content semantics consistency.
arXiv Detail & Related papers (2021-06-09T17:58:08Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)