Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection
- URL: http://arxiv.org/abs/2112.04532v1
- Date: Wed, 8 Dec 2021 19:18:48 GMT
- Title: Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection
- Authors: Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil
Feizi
- Abstract summary: Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection plays a key role in many security-critical systems.
Adversarial patch attacks, which are easy to implement in the physical world,
pose a serious threat to state-of-the-art object detectors. Developing reliable
defenses for object detectors against patch attacks is critical but severely
understudied. In this paper, we propose Segment and Complete defense (SAC), a
general framework for defending object detectors against patch attacks through
detecting and removing adversarial patches. We first train a patch segmenter
that outputs patch masks that provide pixel-level localization of adversarial
patches. We then propose a self-adversarial training algorithm to robustify the
patch segmenter. In addition, we design a robust shape completion algorithm,
which is guaranteed to remove the entire patch from the images provided that the
outputs of the patch segmenter are within a certain Hamming distance of the
ground-truth patch masks. Our experiments on COCO and xView datasets
demonstrate that SAC achieves superior robustness even under strong adaptive
attacks with no performance drop on clean images, and generalizes well to
unseen patch shapes, attack budgets, and unseen attack methods. Furthermore, we
present the APRICOT-Mask dataset, which augments the APRICOT dataset with
pixel-level annotations of adversarial patches. We show SAC can significantly
reduce the targeted attack success rate of physical patch attacks.
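To make the pipeline concrete, here is a minimal Python (NumPy/SciPy) sketch of the segment-then-remove idea. The `segmenter` callable and the dilation-based completion are illustrative stand-ins: the paper's actual shape-completion algorithm, which provides the Hamming-distance guarantee, is not reproduced here.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def complete_mask(pred_mask, hamming_budget):
        # Stand-in for SAC's shape completion: grow the predicted mask so a
        # ground-truth patch mask within `hamming_budget` pixels of the
        # prediction is still covered (a heuristic, not the paper's algorithm).
        radius = int(np.ceil(np.sqrt(hamming_budget)))
        struct = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
        return binary_dilation(pred_mask.astype(bool), structure=struct)

    def segment_and_complete(image, segmenter, hamming_budget=64):
        # 1) Pixel-level patch localization by the trained patch segmenter.
        pred_mask = segmenter(image) > 0.5        # HxW scores -> boolean mask
        # 2) Shape completion so the entire patch is covered.
        full_mask = complete_mask(pred_mask, hamming_budget)
        # 3) Remove the suspected patch before running the object detector.
        cleaned = image.copy()
        cleaned[full_mask] = 0.0                  # black out masked pixels
        return cleaned

On a clean image an (ideally) empty predicted mask leaves the input untouched, which is consistent with the abstract's claim of no performance drop on clean images.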
Related papers
- Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in the benign performance.
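A minimal sketch of the zero-out idea, assuming a hypothetical per-pixel detector; PatchZero's actual detector and training procedure are not reproduced here:

    import numpy as np

    def patch_zero(image, pixel_detector, threshold=0.5):
        # Task-agnostic "zero out" defense: flag adversarial pixels with a
        # per-pixel detector, repaint them with zeros, and hand the repainted
        # image to any downstream model. `pixel_detector` is a hypothetical
        # callable returning HxW adversarial scores in [0, 1].
        mask = pixel_detector(image) > threshold
        repainted = image.copy()
        repainted[mask] = 0.0
        return repainted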
arXiv Detail & Related papers (2022-07-05T03:49:08Z)
- ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking [95.6347501381882]
Object detectors are found to be vulnerable to physical-world patch hiding attacks.
We propose ObjectSeeker as a framework for building certifiably robust object detectors.
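The patch-agnostic masking idea can be sketched as follows; the band layout and the plain union of boxes are simplifying assumptions, not ObjectSeeker's certified fusion procedure:

    def masked_union_detect(image, detector, k=4):
        # Mask k horizontal and k vertical bands so at least one masked copy
        # removes the (unknown) patch entirely, run the base detector on every
        # copy, and pool the boxes so hidden objects reappear.
        h, w = image.shape[:2]
        boxes = list(detector(image))
        for i in range(k):
            for axis in (0, 1):
                masked = image.copy()
                size = h if axis == 0 else w
                lo, hi = i * size // k, (i + 1) * size // k
                if axis == 0:
                    masked[lo:hi, :] = 0.0
                else:
                    masked[:, lo:hi] = 0.0
                boxes.extend(detector(masked))
        return boxes  # a full implementation would prune duplicate boxes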
arXiv Detail & Related papers (2022-02-03T19:34:25Z)
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches [28.94435153159868]
An adversarial patch can arbitrarily manipulate image pixels within a restricted region to induce model misclassification.
Recent provably robust defenses generally follow the PatchGuard framework by using CNNs with small receptive fields.
We extend PatchGuard to PatchGuard++ for provably detecting the adversarial patch attack to boost both provable robust accuracy and clean accuracy.
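A rough sketch of masking-based attack detection in this spirit, with a hypothetical `classify_head` and window sizes chosen only for illustration:

    def masked_consistency_check(feature_map, classify_head, win=3, stride=2):
        # With a small-receptive-field CNN, a patch corrupts only a small
        # feature window, so some mask removes it entirely; disagreement among
        # masked predictions then signals an attack. `classify_head` is a
        # hypothetical callable mapping a CxHxW array to a class label.
        c, h, w = feature_map.shape
        base = classify_head(feature_map)
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                masked = feature_map.copy()
                masked[:, y:y + win, x:x + win] = 0.0
                if classify_head(masked) != base:
                    return True   # inconsistent under masking: flag as attacked
        return False              # all masked predictions agree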
arXiv Detail & Related papers (2021-04-26T14:22:33Z)
- RPATTACK: Refined Patch Attack on General Object Detectors [31.28929190510979]
We propose a novel patch-based method for attacking general object detectors.
Our RPAttack achieves a missed detection rate of 100% on both YOLOv4 and Faster R-CNN.
arXiv Detail & Related papers (2021-03-23T11:45:41Z)
- The Translucent Patch: A Physical and Universal Attack on Object Detectors [48.31712758860241]
We propose a contactless physical patch to fool state-of-the-art object detectors.
The primary goal of our patch is to hide all instances of a selected target class.
We show that our patch was able to prevent the detection of 42.27% of all stop sign instances.
arXiv Detail & Related papers (2020-12-23T07:47:13Z)
- DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) that fools object detectors with diffused asteroid-shaped or grid-shaped patches.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
- PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking [46.03749650789915]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image.
We propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.
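A simplified stand-in for the masking step, operating on a per-class logit map from a small-receptive-field CNN; PatchGuard's actual robust aggregation and its provable bound are not reproduced here:

    import numpy as np

    def robust_masked_pool(class_logit_map, win=3):
        # Because each feature has a small receptive field, a patch can only
        # corrupt a small window of the per-class logit map. Mask the window
        # with the largest total evidence (where an attacker would concentrate
        # influence) before pooling.
        h, w = class_logit_map.shape
        best, by, bx = -np.inf, 0, 0
        for y in range(h - win + 1):
            for x in range(w - win + 1):
                s = class_logit_map[y:y + win, x:x + win].sum()
                if s > best:
                    best, by, bx = s, y, x
        masked = class_logit_map.copy()
        masked[by:by + win, bx:bx + win] = 0.0
        return masked.mean()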
arXiv Detail & Related papers (2020-05-17T03:38:34Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)