Adversarial Defense via Image Denoising with Chaotic Encryption
- URL: http://arxiv.org/abs/2203.10290v1
- Date: Sat, 19 Mar 2022 10:25:02 GMT
- Title: Adversarial Defense via Image Denoising with Chaotic Encryption
- Authors: Shi Hu, Eric Nalisnick, Max Welling
- Abstract summary: We propose a novel defense that assumes everything but a private key will be made available to the attacker.
Our framework uses an image denoising procedure coupled with encryption via a discretized Baker map.
- Score: 65.48888274263756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the literature on adversarial examples, white box and black box attacks
have received the most attention. The adversary is assumed to have either full
(white) or no (black) access to the defender's model. In this work, we focus on
the equally practical gray box setting, assuming an attacker has partial
information. We propose a novel defense that assumes everything but a private
key will be made available to the attacker. Our framework uses an image
denoising procedure coupled with encryption via a discretized Baker map.
Extensive testing against adversarial images (e.g. FGSM, PGD) crafted using
various gradients shows that our defense achieves significantly better results
on CIFAR-10 and CIFAR-100 than the state-of-the-art gray box defenses in both
natural and adversarial accuracy.
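For intuition, the sketch below implements a Fridrich-style discretized Baker map in NumPy, the family of keyed permutations the abstract names; the tuple of strip widths acts as the private key. This is a minimal illustration, not the authors' code, and the coupling with the denoising network is omitted.

```python
import numpy as np

def baker_permute(img, key):
    """One round of a discretized Baker map on a square (N x N) image.

    `key` is a tuple of strip widths (n_1, ..., n_k) with sum N, each
    dividing N; it acts as the secret key of the permutation.
    """
    N = img.shape[0]
    assert img.shape[1] == N and sum(key) == N
    out = np.empty_like(img)
    Ni = 0                              # left edge of the current strip
    for n in key:
        q = N // n                      # stretch factor for this strip
        for r in range(Ni, Ni + n):
            for s in range(N):
                out[q * (r - Ni) + s % q, (s - s % q) // q + Ni] = img[r, s]
        Ni += n
    return out

def baker_invert(img, key):
    """Invert the permutation by tracking where each flat index lands."""
    N = img.shape[0]
    idx = baker_permute(np.arange(N * N).reshape(N, N), key)
    dec = np.empty(N * N, dtype=img.dtype)
    dec[idx.ravel()] = img.ravel()
    return dec.reshape(N, N)

# Example: two rounds on a 32x32 (CIFAR-sized) grayscale image.
x = np.random.rand(32, 32)
enc = baker_permute(baker_permute(x, (8, 8, 16)), (8, 8, 16))
dec = baker_invert(baker_invert(enc, (8, 8, 16)), (8, 8, 16))
assert np.allclose(x, dec)
```

Because the permutation is keyed, an attacker who sees everything else (the gray box premise above) still cannot reproduce the preprocessing the defender applies without the strip widths.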
Related papers
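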
- Gradient Masking All-at-Once: Ensemble Everything Everywhere Is Not Robust [65.95797963483729]
"Ensemble everything everywhere" is a defense against adversarial examples.
We show that this defense is not robust to adversarial attack.
We then use standard adaptive attack techniques to reduce the defense's robust accuracy.
arXiv Detail & Related papers (2024-11-22T10:17:32Z)
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods fail to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z)
- Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks [2.9815109163161204]
Our paper presents a novel defence against black box attacks, where attackers use the victim model as an oracle to craft their adversarial examples.
Unlike traditional preprocessing defences that rely on sanitizing input samples, our strategy counters the attack process itself.
We demonstrate that our approach is remarkably effective against state-of-the-art black box attacks and outperforms existing defences for both the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-03-14T10:59:54Z)
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A^5$ is a framework that crafts a defensive perturbation to guarantee that any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground-truth label.
We also show how to apply $A^5$ to create certifiably robust physical objects.
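As a hedged sketch of that idea (an illustrative reading, not the authors' implementation), a robustifier network can emit a bounded perturbation that is added to the input before classification; the bound `eps_d`, the `tanh` squashing, and all names below are assumptions.

```python
import torch

def defend(x, robustifier, classifier, eps_d=8 / 255):
    """Illustrative on-the-fly defensive augmentation: the robustifier
    proposes a perturbation from the input alone (no ground-truth label),
    squashed into an eps_d ball and added before classification."""
    w = eps_d * torch.tanh(robustifier(x))  # bounded defensive perturbation
    return classifier((x + w).clamp(0.0, 1.0))
```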
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks [72.59081183040682]
We propose dynamic defenses that adapt the model and input during testing via defensive entropy minimization (dent).
dent improves the robustness of adversarially trained defenses and nominally trained models against white-box, black-box, and adaptive attacks on CIFAR-10/100 and ImageNet.
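A minimal PyTorch sketch of this kind of test-time adaptation, assuming a differentiable classifier; dent itself restricts which parameters adapt (for example, normalization layers) and resets between batches, which is omitted here.

```python
import torch
import torch.nn.functional as F

def dent_predict(model, x, steps=3, lr=1e-3):
    """Defensive entropy minimization (sketch): adapt the model and the
    input at test time by minimizing the entropy of the predictions."""
    delta = torch.zeros_like(x, requires_grad=True)  # input adaptation
    opt = torch.optim.Adam(list(model.parameters()) + [delta], lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(x + delta), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    with torch.no_grad():
        return model(x + delta).argmax(dim=1)
```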
arXiv Detail & Related papers (2021-05-18T17:55:07Z)
- Ensemble of Models Trained by Key-based Transformed Images for Adversarially Robust Defense Against Black-box Attacks [17.551718914117917]
We propose a voting ensemble of models trained by using block-wise transformed images with secret keys for an adversarially robust defense.
Key-based adversarial defenses were demonstrated to outperform state-of-the-art defenses against gradient-based (white-box) attacks.
We aim to enhance robustness against black-box attacks by using a voting ensemble of models.
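A hedged sketch of the voting step, where `models` and `transforms` are hypothetical stand-ins for classifiers trained on images transformed with their own secret keys:

```python
import numpy as np

def ensemble_vote(models, transforms, x):
    """Key-based voting ensemble (sketch): transform the input with each
    model's secret-key transform, predict, and take the majority vote."""
    votes = [int(m(t(x)).argmax()) for m, t in zip(models, transforms)]
    return int(np.bincount(votes).argmax())
```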
arXiv Detail & Related papers (2020-11-16T02:48:37Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Beware the Black-Box: on the Robustness of Recent Defenses to Adversarial Examples [11.117775891953018]
We expand upon the analysis of these defenses to include adaptive black-box attacks.
Our investigation uses two black-box adversarial models and six widely studied adversarial attacks on the CIFAR-10 and Fashion-MNIST datasets.
Our results paint a clear picture: defenses need both thorough white-box and black-box analyses to be considered secure.
arXiv Detail & Related papers (2020-06-18T22:29:12Z)
- Encryption Inspired Adversarial Defense for Visual Classification [17.551718914117917]
We propose a new adversarial defense inspired by image encryption methods.
The proposed method utilizes a block-wise pixel shuffling with a secret key.
It achieves high accuracy (91.55% on clean images and 89.66% on adversarial examples with a noise distance of 8/255) on the CIFAR-10 dataset.
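A minimal sketch of the block-wise shuffling, assuming the same key-derived permutation is applied to every patch at both training and test time; deriving the permutation from an integer seed is an illustrative choice, not necessarily the paper's exact scheme.

```python
import numpy as np

def key_shuffle(img, key, block=4):
    """Shuffle the pixels (all channels) inside every block x block patch
    of an H x W x C image with one permutation derived from the secret
    integer key. Assumes H and W are divisible by `block`."""
    h, w, c = img.shape
    perm = np.random.RandomState(key).permutation(block * block * c)
    out = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = out[i:i + block, j:j + block].reshape(-1)
            out[i:i + block, j:j + block] = patch[perm].reshape(block, block, c)
    return out
```

A classifier trained on `key_shuffle(x, key)` images sees in-distribution inputs only when queries are shuffled with the same key, which is the sense in which the key gates access to the model.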
arXiv Detail & Related papers (2020-05-16T14:18:07Z)