PAD: Patch-Agnostic Defense against Adversarial Patch Attacks
- URL: http://arxiv.org/abs/2404.16452v1
- Date: Thu, 25 Apr 2024 09:32:34 GMT
- Title: PAD: Patch-Agnostic Defense against Adversarial Patch Attacks
- Authors: Lihua Jing, Rui Wang, Wenqi Ren, Xin Dong, Cong Zou
- Abstract summary: Adversarial patch attacks present a significant threat to real-world object detectors.
We show two inherent characteristics of adversarial patches, semantic independence and spatial heterogeneity.
We propose PAD, a novel adversarial patch localization and removal method that does not require prior knowledge or additional training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patch attacks present a significant threat to real-world object detectors due to their practical feasibility. Existing defense methods, which rely on attack data or prior knowledge, struggle to effectively address a wide range of adversarial patches. In this paper, we show two inherent characteristics of adversarial patches, semantic independence and spatial heterogeneity, independent of their appearance, shape, size, quantity, and location. Semantic independence indicates that adversarial patches operate autonomously within their semantic context, while spatial heterogeneity manifests as distinct image quality in the patch area that differs from the original clean image due to the patch's independent generation process. Based on these observations, we propose PAD, a novel adversarial patch localization and removal method that does not require prior knowledge or additional training. PAD offers patch-agnostic defense against various adversarial patches and is compatible with any pre-trained object detector. Our comprehensive digital and physical experiments involving diverse patch types, such as localized noise, printable, and naturalistic patches, demonstrate notable improvements over state-of-the-art works. Our code is available at https://github.com/Lihua-Jing/PAD.
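The spatial-heterogeneity cue from the abstract can be illustrated with a small sketch. Note that PAD's actual localization relies on compression-based image quality differences and is not published in this listing; the code below is a hypothetical stand-in that scores 8x8 blocks by high-frequency residual energy and flags statistical outliers. The functions `high_freq_energy` and `localize_patch`, the block size, and the z-score threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def high_freq_energy(img, block=8):
    """Mean squared high-frequency residual per non-overlapping block.

    The residual is each pixel minus the average of its 4 neighbors;
    an independently generated noise patch carries far more
    high-frequency energy than smooth surrounding image content.
    """
    resid = np.zeros_like(img)
    resid[1:-1, 1:-1] = img[1:-1, 1:-1] - (
        img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
    ) / 4.0
    h, w = img.shape
    nb_h, nb_w = h // block, w // block
    # Average the squared residual over each block.
    return (resid[: nb_h * block, : nb_w * block] ** 2
            ).reshape(nb_h, block, nb_w, block).mean(axis=(1, 3))

def localize_patch(img, z_thresh=2.0):
    """Flag blocks whose local statistics deviate strongly from the
    image-wide mean, i.e. candidate adversarial-patch regions."""
    energy = high_freq_energy(img)
    z = (energy - energy.mean()) / (energy.std() + 1e-12)
    return z > z_thresh  # boolean mask, one entry per block
```

On a smooth synthetic image with a random-noise square pasted in, `localize_patch` flags exactly the blocks covered by the noise region; in PAD the localized region would then be removed (e.g. masked or inpainted) before the image reaches the detector.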
Related papers
- DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model [88.14122962946858]
We propose a novel diffusion-based customizable patch generation framework termed DiffPatch.
Our approach enables users to utilize a reference image as the source, rather than starting from random noise.
We have created a physical adversarial T-shirt dataset, AdvPatch-1K, specifically targeting YOLOv5s.
arXiv Detail & Related papers (2024-12-02T12:30:35Z) - Defending Adversarial Patches via Joint Region Localizing and Inpainting [16.226410937026685]
We propose a novel defense method based on a "localizing and inpainting" mechanism to pre-process the input examples.
A series of experiments on traffic sign classification and detection tasks are conducted to defend against various adversarial patch attacks.
arXiv Detail & Related papers (2023-07-26T15:11:51Z) - Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches [18.61334396999853]
Adversarial patch attacks pose a threat to computer vision systems.
State-of-the-art certified defenses can be compatible with any model architecture.
We propose a novel two-stage Iterative Black-box Certified Defense method, termed IBCD.
arXiv Detail & Related papers (2023-05-18T12:43:04Z) - Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in the benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z) - Defending Object Detectors against Patch Attacks with Out-of-Distribution Smoothing [21.174037250133622]
We introduce OODSmoother, which characterizes the properties of approaches that aim to remove adversarial patches.
This framework naturally guides us to design 1) a novel adaptive attack that breaks existing patch attack defenses on object detectors, and 2) a novel defense approach SemPrior that takes advantage of semantic priors.
We find that SemPrior alone provides up to a 40% gain, or up to a 60% gain when combined with existing defenses.
arXiv Detail & Related papers (2022-05-18T15:20:18Z) - Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Adversarial Training against Location-Optimized Adversarial Patches [84.96938953835249]
Adversarial patches are clearly visible, yet adversarially crafted, rectangular patches in images.
We first devise a practical approach to obtain adversarial patches while actively optimizing their location within the image.
We apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB.
arXiv Detail & Related papers (2020-05-05T16:17:00Z) - Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.