Revisiting Image Classifier Training for Improved Certified Robust
Defense against Adversarial Patches
- URL: http://arxiv.org/abs/2306.12610v1
- Date: Thu, 22 Jun 2023 00:13:44 GMT
- Title: Revisiting Image Classifier Training for Improved Certified Robust
Defense against Adversarial Patches
- Authors: Aniruddha Saha, Shuhua Yu, Arash Norouzzadeh, Wan-Yi Lin, Chaithanya
Kumar Mummadi
- Abstract summary: We propose a two-round greedy masking strategy (Greedy Cutout) which finds an approximate worst-case mask location with much less compute.
We show that models trained with our Greedy Cutout improve certified robust accuracy over Random Cutout in PatchCleanser across a range of datasets.
- Score: 7.90470727433401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Certifiably robust defenses against adversarial patches for image classifiers
ensure correct prediction against any changes to a constrained neighborhood of
pixels. PatchCleanser arXiv:2108.09135 [cs.CV], the state-of-the-art certified
defense, uses a double-masking strategy for robust classification. The success
of this strategy relies heavily on the model's invariance to image pixel
masking. In this paper, we take a closer look at model training schemes to
improve this invariance. Instead of using Random Cutout arXiv:1708.04552v2
[cs.CV] augmentations like PatchCleanser, we introduce the notion of worst-case
masking, i.e., selecting masked images which maximize classification loss.
However, finding worst-case masks requires an exhaustive search, which might be
prohibitively expensive to do on-the-fly during training. To solve this
problem, we propose a two-round greedy masking strategy (Greedy Cutout) which
finds an approximate worst-case mask location with much less compute. We show
that models trained with our Greedy Cutout improve certified robust
accuracy over Random Cutout in PatchCleanser across a range of datasets and
architectures. Certified robust accuracy on ImageNet with a ViT-B16-224 model
increases from 58.1% to 62.3% against a 3% square patch applied anywhere on
the image.
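The two-round greedy search described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the mask size, stride, square candidate grid, and `loss_fn` (which would wrap the classifier's training loss on the masked image) are all assumptions made here for concreteness.

```python
import numpy as np

def mask_candidates(h, w, mask, stride):
    """Yield top-left corners of a sliding square mask over an h x w image."""
    for y in range(0, h - mask + 1, stride):
        for x in range(0, w - mask + 1, stride):
            yield y, x

def apply_mask(img, y, x, mask):
    """Return a copy of the image with one square region zeroed out."""
    out = img.copy()
    out[y:y + mask, x:x + mask] = 0.0
    return out

def greedy_cutout(img, loss_fn, mask=16, stride=16):
    """Two-round greedy search for an approximate worst-case pair of masks.

    Round 1 picks the single mask location that maximizes the loss;
    round 2 keeps that mask fixed and greedily adds a second one,
    avoiding the exhaustive search over all mask pairs.
    """
    h, w = img.shape[:2]
    # Round 1: best single mask location.
    best1 = max(mask_candidates(h, w, mask, stride),
                key=lambda p: loss_fn(apply_mask(img, *p, mask)))
    once = apply_mask(img, *best1, mask)
    # Round 2: best second mask location, given the first is applied.
    best2 = max(mask_candidates(h, w, mask, stride),
                key=lambda p: loss_fn(apply_mask(once, *p, mask)))
    return best1, best2, apply_mask(once, *best2, mask)
```

During training, the doubly-masked image returned here would replace the randomly cut-out image that PatchCleanser's original training recipe uses.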
Related papers
- Learning Mask-aware CLIP Representations for Zero-Shot Segmentation [120.97144647340588]
Image-Proposals CLIP (IP-CLIP) is proposed to handle arbitrary numbers of image and mask proposals simultaneously.
A mask-aware loss and a self-distillation loss are designed to fine-tune IP-CLIP, ensuring CLIP is responsive to different mask proposals.
We conduct extensive experiments on the popular zero-shot benchmarks.
arXiv Detail & Related papers (2023-09-30T03:27:31Z) - DPPMask: Masked Image Modeling with Determinantal Point Processes [49.65141962357528]
Masked Image Modeling (MIM) has achieved impressive representation performance with the aim of reconstructing randomly masked images.
We show that uniformly random masking widely used in previous works unavoidably loses some key objects and changes original semantic information.
To address this issue, we augment MIM with a new masking strategy namely the DPPMask.
Our method is simple yet effective and requires no extra learnable parameters when implemented within various frameworks.
arXiv Detail & Related papers (2023-03-13T13:40:39Z) - Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z) - Certified Defences Against Adversarial Patch Attacks on Semantic
Segmentation [44.13336566131961]
We present Demasked Smoothing, the first approach to certify the robustness of semantic segmentation models against patch attacks.
Using different masking strategies, Demasked Smoothing can be applied both for certified detection and certified recovery.
In extensive experiments we show that Demasked Smoothing can on average certify 64% of the pixel predictions for a 1% patch in the detection task and 48% against a 0.5% patch for the recovery task on the ADE20K dataset.
arXiv Detail & Related papers (2022-09-13T13:24:22Z) - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers [117.79456335844439]
We propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction.
We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches.
Experiments on image classification and semantic segmentation show that our approach outperforms all compared MIM methods.
arXiv Detail & Related papers (2022-08-12T16:48:10Z) - Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling these two designs enables us to train large models efficiently and effectively.
arXiv Detail & Related papers (2021-11-11T18:46:40Z) - Towards robustness under occlusion for face recognition [0.0]
In this paper, we evaluate the effects of occlusions in the performance of a face recognition pipeline that uses a ResNet backbone.
We designed 8 different occlusion masks which were applied to the input images.
In order to increase robustness under occlusions, we followed two approaches. The first is image inpainting using the pre-trained pluralistic image completion network.
The second is Cutmix, a regularization strategy consisting of mixing training images and their labels using rectangular patches.
arXiv Detail & Related papers (2021-09-19T08:27:57Z) - PatchCleanser: Certifiably Robust Defense against Adversarial Patches
for Any Image Classifier [30.559585856170216]
An adversarial patch attack against image classification models aims to inject adversarially crafted pixels within a localized, restricted image region (i.e., a patch).
We propose PatchCleanser as a robust defense against adversarial patches that is compatible with any image classification model.
We extensively evaluate our defense on the ImageNet, ImageNette, CIFAR-10, CIFAR-100, SVHN, and Flowers-102 datasets.
arXiv Detail & Related papers (2021-08-20T12:09:33Z) - Block-wise Image Transformation with Secret Key for Adversarially Robust
Defense [17.551718914117917]
We develop three algorithms to realize the proposed transformation: Pixel Shuffling, Bit Flipping, and FFX Encryption.
Experiments were carried out on the CIFAR-10 and ImageNet datasets by using both black-box and white-box attacks.
The proposed defense achieves high accuracy close to that of using clean images even under adaptive attacks for the first time.
arXiv Detail & Related papers (2020-10-02T06:07:12Z) - (De)Randomized Smoothing for Certifiable Defense against Patch Attacks [136.79415677706612]
We introduce a certifiable defense against patch attacks that provides robustness guarantees for a given image and patch attack size.
Our method is related to the broad class of randomized smoothing robustness schemes.
Our results effectively establish a new state-of-the-art of certifiable defense against patch attacks on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2020-02-25T08:39:46Z)
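The double-masking strategy from PatchCleanser, on which the main abstract builds, can be sketched roughly as follows. This is a simplified illustration of the prediction logic only, with no certification step; `classify` and the `mask_set` of masking functions covering all possible patch locations are hypothetical stand-ins.

```python
def double_masking_predict(img, classify, mask_set):
    """Simplified sketch of PatchCleanser-style double-masking inference.

    classify(img) -> label; mask_set is a list of masking functions whose
    masks jointly cover every possible patch location.
    """
    first = [classify(m(img)) for m in mask_set]
    majority = max(set(first), key=first.count)
    if all(p == majority for p in first):
        return majority  # first round is unanimous: accept
    # Second round: test each disagreeing one-mask prediction
    # under every second mask.
    for m1, p1 in zip(mask_set, first):
        if p1 == majority:
            continue
        second = [classify(m2(m1(img))) for m2 in mask_set]
        if all(p == p1 for p in second):
            return p1  # the disagreer survives all second masks
    return majority  # fall back to the first-round majority
```

The abstract's observation is that this procedure succeeds only when the classifier's predictions are stable under such masking, which is exactly the invariance that Greedy Cutout training targets.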
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.