Efficient Certified Defenses Against Patch Attacks on Image Classifiers
- URL: http://arxiv.org/abs/2102.04154v1
- Date: Mon, 8 Feb 2021 12:11:41 GMT
- Title: Efficient Certified Defenses Against Patch Attacks on Image Classifiers
- Authors: Jan Hendrik Metzen, Maksym Yatsura
- Abstract summary: BagCert is a novel combination of model architecture and certification procedure that allows efficient certification.
On CIFAR10, BagCert certifies 10,000 examples in 43 seconds on a single GPU and obtains 86% clean and 60% certified accuracy against 5x5 patches.
- Score: 13.858624044986815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patches pose a realistic threat model for physical world attacks
on autonomous systems via their perception component. Autonomous systems in
safety-critical domains such as automated driving should thus contain a
fail-safe fallback component that combines certifiable robustness against
patches with efficient inference while maintaining high performance on clean
inputs. We propose BagCert, a novel combination of model architecture and
certification procedure that allows efficient certification. We derive a loss
that enables end-to-end optimization of certified robustness against patches of
different sizes and locations. On CIFAR10, BagCert certifies 10,000 examples in
43 seconds on a single GPU and obtains 86% clean and 60% certified accuracy
against 5x5 patches.
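The certification idea behind defenses of this kind can be illustrated with a schematic margin check: if the final logits are a sum of bounded per-region scores and a patch can only influence a limited number of regions, a sufficiently large margin between the top class and the runner-up rules out any successful patch placement. The following is a minimal sketch under those assumptions, not BagCert's exact procedure; the function name, the region model, and the [0, score_bound] assumption are illustrative.

```python
import numpy as np

def certify_margin(region_scores, patch_regions, score_bound=1.0):
    """Schematic margin-based patch certificate.

    region_scores: (num_regions, num_classes) array of per-region class
                   scores, each assumed to lie in [0, score_bound],
                   summed into the final class scores.
    patch_regions: maximum number of regions any single patch placement
                   can overlap.
    Returns (predicted_class, certified).
    """
    total = region_scores.sum(axis=0)  # aggregated class scores
    top = int(np.argmax(total))

    # Worst case: on every overlapped region the attacker drops the top
    # class's score to 0 and raises the runner-up's score to score_bound,
    # shrinking the margin by at most 2 * patch_regions * score_bound.
    worst_case_shift = 2 * patch_regions * score_bound

    runner_up = np.partition(total, -2)[-2]  # second-largest total score
    certified = (total[top] - runner_up) > worst_case_shift
    return top, bool(certified)
```

The design choice that makes this efficient is that one forward pass yields all region scores, so the certificate is a constant-time check on the margin rather than an enumeration of patch placements.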
Related papers
- PatchCURE: Improving Certifiable Robustness, Model Utility, and Computation Efficiency of Adversarial Patch Defenses [46.098482151215556]
State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.
This impressive performance typically comes at the cost of 10-100x more inference-time computation compared to undefended models.
We propose a defense framework named PatchCURE to approach this trade-off problem.
arXiv Detail & Related papers (2023-10-19T18:14:33Z)
- Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches [18.61334396999853]
Adversarial patch attacks pose a threat to computer vision systems.
State-of-the-art certified defenses can be compatible with any model architecture.
We propose a novel two-stage Iterative Black-box Certified Defense method, termed IBCD.
arXiv Detail & Related papers (2023-05-18T12:43:04Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees [63.85677512968049]
Point cloud classification is an essential component in many security-critical applications such as autonomous driving and augmented reality.
Existing certified defenses against adversarial point clouds suffer from a key limitation: their certified robustness guarantees are probabilistic.
We propose a general framework, namely PointCert, that can transform an arbitrary point cloud classifier to be certifiably robust against adversarial point clouds.
arXiv Detail & Related papers (2023-03-03T14:32:48Z)
- Towards Practical Certifiable Patch Defense with Vision Transformer [34.00374565048962]
We introduce the Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS).
For efficient inference and deployment in the real world, we innovatively reconstruct the global self-attention structure of the original ViT into isolated band unit self-attention.
arXiv Detail & Related papers (2022-03-16T10:39:18Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- PatchCensor: Patch Robustness Certification for Transformers via Exhaustive Testing [7.88628640954152]
Vision Transformer (ViT) is known to be highly nonlinear like other classical neural networks and could be easily fooled by both natural and adversarial patch perturbations.
This limitation could pose a threat to the deployment of ViT in the real industrial environment, especially in safety-critical scenarios.
We propose PatchCensor, aiming to certify the patch robustness of ViT by applying exhaustive testing.
arXiv Detail & Related papers (2021-11-19T23:45:23Z)
- PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking [46.03749650789915]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image.
We propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.
arXiv Detail & Related papers (2020-05-17T03:38:34Z)
- (De)Randomized Smoothing for Certifiable Defense against Patch Attacks [136.79415677706612]
We introduce a certifiable defense against patch attacks that guarantees, for a given image and patch attack size, that no patch attack can cause misclassification.
Our method is related to the broad class of randomized smoothing robustness schemes.
Our results effectively establish a new state-of-the-art of certifiable defense against patch attacks on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2020-02-25T08:39:46Z)
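The (de)randomized smoothing certificate sketched in the entry above can be illustrated as follows: the classifier votes once per retained column band of the image, and a patch of bounded width can only intersect, and therefore flip, a bounded number of those bands. This is a minimal sketch under that vote model; the function name is an assumption, and the tie-breaking details of the actual method are omitted.

```python
from collections import Counter

def certify_column_smoothing(band_votes, band_width, patch_width):
    """Schematic column-smoothing certificate.

    band_votes: predicted class per column-band ablation (one classifier
                run with only that band of width band_width visible).
    Returns (majority_class, certified).
    """
    # A patch of width patch_width intersects at most this many bands:
    affected = patch_width + band_width - 1

    counts = Counter(band_votes)
    ranked = counts.most_common()
    top, n_top = ranked[0]
    n_second = ranked[1][1] if len(ranked) > 1 else 0

    # Worst case: the patch removes `affected` votes from the majority
    # class and adds them all to the runner-up.
    certified = n_top - n_second > 2 * affected
    return top, certified
```

Because the set of bands is fixed rather than sampled, the vote tally is exact and the resulting guarantee is deterministic, which is the key difference from randomized smoothing.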
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.