The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks
- URL: http://arxiv.org/abs/2305.14188v1
- Date: Tue, 23 May 2023 16:07:58 GMT
- Title: The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks
- Authors: Iuri Frosio and Jan Kautz
- Abstract summary: $A^5$ is a framework to craft a defensive perturbation that guarantees any attack (up to a given magnitude) towards the input in hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A^5$ to create certifiably robust physical objects.
- Score: 91.56314751983133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many defenses against adversarial attacks (e.g., robust classifiers,
randomization, or image purification) use countermeasures put to work only
after the attack has been crafted. We adopt a different perspective to
introduce $A^5$ (Adversarial Augmentation Against Adversarial Attacks), a novel
framework including the first certified preemptive defense against adversarial
attacks. The main idea is to craft a defensive perturbation to guarantee that
any attack (up to a given magnitude) towards the input in hand will fail. To
this aim, we leverage existing automatic perturbation analysis tools for neural
networks. We study the conditions to apply $A^5$ effectively, analyze the
importance of the robustness of the to-be-defended classifier, and inspect the
appearance of the robustified images. We show effective on-the-fly defensive
augmentation with a robustifier network that ignores the ground truth label,
and demonstrate the benefits of robustifier and classifier co-training. In our
tests, $A^5$ consistently beats state-of-the-art certified defenses on MNIST,
CIFAR10, FashionMNIST, and TinyImageNet. We also show how to apply $A^5$ to
create certifiably robust physical objects. Our code at
https://github.com/NVlabs/A5 allows experimenting on a wide range of scenarios
beyond the man-in-the-middle attack tested here, including the case of physical
attacks.
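The preemptive recipe above can be pictured as a two-level optimization: an outer loop crafts the defensive perturbation while an inner loop plays the worst-case attacker. The paper certifies the inner level with automatic perturbation analysis tools; the PyTorch sketch below instead approximates it with PGD, and the toy network, budgets (`eps_def`, `eps_atk`), and schedules are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=10):
    """Approximate worst-case l_inf attack of magnitude eps (inner level)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += (2.5 * eps / steps) * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)                           # attacker's budget
        delta.grad.zero_()
    return delta.detach()

def craft_defense(model, x, y, eps_def, eps_atk, steps=20, lr=0.05):
    """Outer level: optimize a defensive perturbation so the approximate
    worst-case attack fails (the paper certifies this step instead)."""
    d = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        x_def = (x + d).clamp(0, 1)                 # robustified input
        a = pgd_attack(model, x_def.detach(), y, eps_atk)
        loss = F.cross_entropy(model((x_def + a).clamp(0, 1)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            d.clamp_(-eps_def, eps_def)             # defender's budget
    return d.detach()

# Toy usage on random "images"; the small MLP stands in for the classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
d = craft_defense(model, x, y, eps_def=8 / 255, eps_atk=4 / 255)
print("defensive perturbation max |d|:", d.abs().max().item())
```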
Related papers
- Robust width: A lightweight and certifiable adversarial defense [0.0]
Adversarial examples are intentionally constructed to cause the model to make incorrect predictions or classifications.
In this work, we study an adversarial defense based on the robust width property (RWP), which was recently introduced for compressed sensing.
We show that a specific input purification scheme based on the RWP gives theoretical robustness guarantees for images that are approximately sparse.
arXiv Detail & Related papers (2024-05-24T22:50:50Z)
- IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z)
- Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance [17.57296795184232]
Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time.
We come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation.
Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples.
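As a hedged illustration of that pixelwise multi-norm idea (not the paper's exact algorithm), one ascent step could mix the $\ell_\infty$ direction (gradient sign) and the $\ell_2$ direction (normalized gradient) with per-pixel weights; the toy model, the random weights `w` (learned in the paper), and the omitted joint-budget projection are all assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier standing in for the victim model (an assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def multi_norm_step(x, y, delta, w, alpha=1e-2):
    """One ascent step mixing the l_inf direction with the l_2 direction,
    weighted per pixel by w in [0, 1]."""
    delta = delta.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    g, = torch.autograd.grad(loss, delta)
    step_inf = g.sign()                                           # l_inf direction
    step_l2 = g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    return (delta + alpha * (w * step_inf + (1 - w) * step_l2)).detach()

x = torch.rand(2, 3, 32, 32)
y = torch.randint(0, 10, (2,))
w = torch.rand_like(x)            # per-pixel mixing weights (learned in the paper)
delta = torch.zeros_like(x)
for _ in range(5):
    delta = multi_norm_step(x, y, delta, w)
print(delta.abs().max().item())
```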
arXiv Detail & Related papers (2022-12-05T15:38:37Z)
- Increasing Confidence in Adversarial Robustness Evaluations [53.2174171468716]
We propose a test to identify weak attacks and thus weak defense evaluations.
Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample.
For eleven out of thirteen previously-published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it.
arXiv Detail & Related papers (2022-06-28T13:28:13Z)
- LAFEAT: Piercing Through Adversarial Defenses with Latent Features [15.189068478164337]
We show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks.
We introduce LAFEAT, a unified $\ell_\infty$-norm white-box attack algorithm that harnesses latent features in its gradient descent steps.
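A minimal sketch of this latent-feature idea, assuming a toy model, a forward hook to capture an intermediate activation, and an auxiliary linear "latent head" (all assumptions; LAFEAT's actual heads and schedule differ):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy victim model; the hook captures the latent features after the ReLU.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
feat = {}
model[2].register_forward_hook(lambda mod, inp, out: feat.update(z=out))
head = nn.Linear(128, 10)         # auxiliary latent head (an assumption)

def latent_feature_attack(x, y, eps=8 / 255, steps=10, beta=0.5):
    """PGD-style l_inf attack whose loss also scores the latent features."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y) + beta * F.cross_entropy(head(feat["z"]), y)
        loss.backward()
        with torch.no_grad():
            delta += (2.5 * eps / steps) * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

x = torch.rand(2, 3, 32, 32)
y = torch.randint(0, 10, (2,))
print(latent_feature_attack(x, y).abs().max().item())
```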
arXiv Detail & Related papers (2021-04-19T13:22:20Z)
- Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers [24.809185168969066]
One important attack can fool a classifier by placing black-and-white stickers on an object such as a road sign.
There are currently no defenses designed to protect against this attack.
In this paper, we propose new defenses that can protect against multi-sticker attacks.
arXiv Detail & Related papers (2021-01-26T19:59:28Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack-agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending the attack-defense cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)