Encryption Inspired Adversarial Defense for Visual Classification
- URL: http://arxiv.org/abs/2005.07998v1
- Date: Sat, 16 May 2020 14:18:07 GMT
- Title: Encryption Inspired Adversarial Defense for Visual Classification
- Authors: MaungMaung AprilPyone and Hitoshi Kiya
- Abstract summary: We propose a new adversarial defense inspired by image encryption methods.
The proposed method utilizes a block-wise pixel shuffling with a secret key.
It achieves high accuracy (91.55% on clean images and 89.66% on adversarial examples with a noise distance of 8/255) on the CIFAR-10 dataset.
- Score: 17.551718914117917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional adversarial defenses reduce classification accuracy whether or
not a model is under attack. Moreover, most image-processing-based defenses
are defeated due to the problem of obfuscated gradients. In this paper, we
propose a new adversarial defense: a defensive transform applied to both
training and test images, inspired by perceptual image encryption methods. The
proposed method utilizes block-wise pixel shuffling with a secret key.
The experiments are carried out on both adaptive and non-adaptive maximum-norm
bounded white-box attacks while considering obfuscated gradients. The results
show that the proposed defense achieves high accuracy (91.55%) on clean images
and (89.66%) on adversarial examples with a noise distance of 8/255 on the
CIFAR-10 dataset. The proposed defense thus outperforms state-of-the-art
adversarial defenses, including latent adversarial training, adversarial
training, and thermometer encoding.
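As a rough illustration of the defensive transform, the following is a minimal Python/NumPy sketch of key-based block-wise pixel shuffling. The 4x4 block size, the seeded NumPy generator as the keyed permutation source, and the function name are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of key-based block-wise pixel shuffling (assumptions:
# 4x4 blocks, NumPy's seeded generator derives the keyed permutation).
import numpy as np

def blockwise_pixel_shuffle(image: np.ndarray, key: int, block_size: int = 4) -> np.ndarray:
    """Permute pixel positions inside each block using a permutation derived from `key`."""
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.default_rng(key)                 # the secret key seeds the permutation
    perm = rng.permutation(block_size * block_size)  # one fixed permutation per key
    out = image.copy()
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = out[i:i + block_size, j:j + block_size].reshape(-1, c)
            out[i:i + block_size, j:j + block_size] = block[perm].reshape(block_size, block_size, c)
    return out

# The same keyed transform is applied to both training and test images,
# so the classifier only ever sees key-shuffled inputs.
x = np.random.rand(32, 32, 3).astype(np.float32)  # stand-in for a CIFAR-10 image
x_shuffled = blockwise_pixel_shuffle(x, key=1234)
```

Because the permutation is reproducible only with the key, an attacker without the key cannot match the input distribution the model was trained on.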
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods fail to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z)
- Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks [3.6275442368775512]
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information.
Our proposed defense mechanism utilizes a clustering-based technique called DBSCAN to isolate anomalous image segments (a toy sketch of this step follows the list below).
arXiv Detail & Related papers (2024-02-09T08:52:47Z)
- Adversarial Defense via Image Denoising with Chaotic Encryption [65.48888274263756]
We propose a novel defense that assumes everything but a private key will be made available to the attacker.
Our framework uses an image denoising procedure coupled with encryption via a discretized Baker map (see the sketch after this list).
arXiv Detail & Related papers (2022-03-19T10:25:02Z)
- Adversarially Robust Classification by Conditional Generative Model Inversion [4.913248451323163]
We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack.
Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images.
We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks.
arXiv Detail & Related papers (2022-01-12T23:11:16Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning (illustrated after this list) and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Adversarial Robustness by Design through Analog Computing and Synthetic Gradients [80.60080084042666]
We propose a new defense mechanism against adversarial attacks inspired by an optical co-processor.
In the white-box setting, our defense works by obfuscating the parameters of the random projection.
We find the combination of a random projection and binarization in the optical system also improves robustness against various types of black-box attacks.
arXiv Detail & Related papers (2021-01-06T16:15:29Z)
- Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks [17.551718914117917]
We propose a voting ensemble of models trained by using block-wise transformed images with secret keys for an adversarially robust defense.
Key-based adversarial defenses were demonstrated to outperform state-of-the-art defenses against gradient-based (white-box) attacks.
We aim to enhance robustness against black-box attacks by using a voting ensemble of models.
arXiv Detail & Related papers (2020-11-16T02:48:37Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack-agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Block-wise Image Transformation with Secret Key for Adversarially Robust Defense [17.551718914117917]
We develop three algorithms to realize the proposed transformation: Pixel Shuffling, Bit Flipping, and FFX Encryption.
Experiments were carried out on the CIFAR-10 and ImageNet datasets by using both black-box and white-box attacks.
For the first time, the proposed defense achieves accuracy close to that on clean images even under adaptive attacks.
arXiv Detail & Related papers (2020-10-02T06:07:12Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
Instead of removing or destroying adversarial noise, the proposed method synthesizes another image from scratch online for each input image.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
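As a toy illustration of the clustering step named in the "Anomaly Unveiled" entry above, here is a hedged sketch using scikit-learn's DBSCAN. The per-patch mean/std features, the patch size, and the eps/min_samples values are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: flag anomalous image patches with DBSCAN (assumed features:
# per-channel mean and std of each patch; eps/min_samples are illustrative).
import numpy as np
from sklearn.cluster import DBSCAN

def flag_anomalous_patches(image: np.ndarray, patch: int = 8,
                           eps: float = 0.15, min_samples: int = 4) -> np.ndarray:
    """Return a boolean grid marking patches that DBSCAN labels as noise (-1)."""
    h, w, _ = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            feats.append(np.concatenate([p.mean(axis=(0, 1)), p.std(axis=(0, 1))]))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.asarray(feats))
    return (labels == -1).reshape(h // patch, w // patch)  # True = outlier patch

mask = flag_anomalous_patches(np.random.rand(32, 32, 3))  # 4x4 grid of flags
```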
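For the "Adversarial Defense via Image Denoising with Chaotic Encryption" entry, the sketch below shows one iteration of a two-fold discretized Baker map on an N x N image. The paper couples a key-dependent generalized Baker map with a denoising procedure; the fixed two-fold partition here is a simplifying assumption meant only to show the permutation structure.

```python
# One iteration of a two-fold discretized Baker map on an N x N image
# (N even). The key-dependent block partition of the generalized map,
# which carries the secret key in the paper, is omitted for brevity.
import numpy as np

def baker_map_step(image: np.ndarray) -> np.ndarray:
    n = image.shape[0]
    assert image.shape[1] == n and n % 2 == 0
    out = np.empty_like(image)
    for x in range(n):          # x: row index
        for y in range(n):      # y: column index
            if x < n // 2:      # top-half rows map into the left-half columns
                out[2 * x + (y % 2), y // 2] = image[x, y]
            else:               # bottom-half rows map into the right-half columns
                out[2 * x - n + (y % 2), y // 2 + n // 2] = image[x, y]
    return out                  # the map is a bijection, so it can be iterated and exactly inverted

scrambled = baker_map_step(np.arange(16, dtype=np.uint8).reshape(4, 4))
```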
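Finally, for the "Error Diffusion Halftoning Against Adversarial Examples" entry, here is a standard Floyd-Steinberg error diffusion pass over a grayscale image as one concrete instance of the transform; the paper's defense additionally combines such halftoning with adversarial training, which is not shown here.

```python
# Standard Floyd-Steinberg error diffusion halftoning of a grayscale image
# in [0, 1]; quantization error is pushed onto not-yet-visited neighbors.
import numpy as np

def floyd_steinberg_halftone(gray: np.ndarray) -> np.ndarray:
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # binarize the current pixel
            img[y, x] = new
            err = old - new                    # diffuse the rounding error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return img

halftoned = floyd_steinberg_halftone(np.random.rand(32, 32))
```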
This list is automatically generated from the titles and abstracts of the papers on this site.