Multiple Perturbation Attack: Attack Pixelwise Under Different
$\ell_p$-norms For Better Adversarial Performance
- URL: http://arxiv.org/abs/2212.03069v2
- Date: Wed, 7 Dec 2022 18:30:33 GMT
- Title: Multiple Perturbation Attack: Attack Pixelwise Under Different
$\ell_p$-norms For Better Adversarial Performance
- Authors: Ngoc N. Tran, Anh Tuan Bui, Dinh Phung, Trung Le
- Abstract summary: Adversarial attacks and defenses are usually likened to a cat-and-mouse game in which defenders and attackers evolve over time.
We come up with a natural approach: combining various $\ell_p$ gradient projections on a pixel level to achieve a joint adversarial perturbation.
Specifically, we learn how to perturb each pixel to maximize the attack performance, while maintaining the overall visual imperceptibility of adversarial examples.
- Score: 17.57296795184232
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial machine learning has been both a major concern and a hot topic
recently, especially with the ubiquitous use of deep neural networks in the
current landscape. Adversarial attacks and defenses are usually likened to a
cat-and-mouse game in which defenders and attackers evolve over time. On
one hand, the goal is to develop strong and robust deep networks that are
resistant to malicious actors. On the other hand, in order to achieve that, we
need to devise even stronger adversarial attacks to challenge these defense
models. Most existing attacks employ a single $\ell_p$ distance (commonly,
$p\in\{1,2,\infty\}$) to define the concept of closeness and perform steepest
gradient ascent w.r.t. this $p$-norm to update all pixels of an adversarial
example in the same way. Each of these $\ell_p$ attacks has its own pros and cons,
and there is no single attack that can successfully break through defense
models that are robust against multiple $\ell_p$ norms simultaneously.
Motivated by these observations, we come up with a natural approach: combining
various $\ell_p$ gradient projections on a pixel level to achieve a joint
adversarial perturbation. Specifically, we learn how to perturb each pixel to
maximize the attack performance, while maintaining the overall visual
imperceptibility of adversarial examples. Finally, through various experiments
with standardized benchmarks, we show that our method outperforms most current
strong attacks across state-of-the-art defense mechanisms, while keeping the
adversarial examples visually clean.
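The abstract's core idea is to combine the steepest-ascent directions induced by different $\ell_p$ norms at the pixel level. Below is a minimal, hypothetical PyTorch sketch of that combination, not the authors' released implementation: it derives the $\ell_1$, $\ell_2$, and $\ell_\infty$ ascent directions from a single gradient and greedily picks, per pixel, whichever direction gives the largest first-order loss increase. The function names, the greedy selection rule, the fixed step size, and the assumption of an NCHW image batch in $[0, 1]$ are all illustrative choices.

```python
# Hypothetical sketch: pixelwise combination of l_1 / l_2 / l_inf ascent directions.
import torch
import torch.nn.functional as F

def lp_directions(grad):
    """Steepest-ascent directions of a loss gradient under l_inf, l_2, and l_1 geometry."""
    b = grad.size(0)
    flat = grad.reshape(b, -1)

    d_inf = grad.sign()                                        # l_inf: per-pixel sign
    d_2 = grad / (flat.norm(dim=1).view(b, 1, 1, 1) + 1e-12)   # l_2: normalized gradient
    d_1 = torch.zeros_like(flat)                               # l_1: single largest coordinate
    idx = flat.abs().argmax(dim=1)
    rows = torch.arange(b, device=grad.device)
    d_1[rows, idx] = flat[rows, idx].sign()
    return d_inf, d_2, d_1.view_as(grad)

def pixelwise_multi_lp_step(model, x, y, step=2 / 255):
    """One untargeted attack step: per pixel, take whichever l_p direction raises the loss most."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)

    candidates = torch.stack(lp_directions(grad))   # (3, B, C, H, W)
    gains = candidates * grad.unsqueeze(0)          # first-order loss increase of each candidate
    choice = gains.argmax(dim=0, keepdim=True)      # per-pixel winning norm
    delta = torch.gather(candidates, 0, choice).squeeze(0)

    return (x + step * delta).clamp(0, 1).detach()
```

In the paper's own formulation the per-pixel combination is learned to maximize attack performance while preserving imperceptibility; the greedy rule above only illustrates the shape of a joint multi-norm update.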
Related papers
- Gradient Masking All-at-Once: Ensemble Everything Everywhere Is Not Robust [65.95797963483729]
Ensemble everything everywhere is a defense against adversarial examples.
We show that this defense is not robust to adversarial attack.
We then use standard adaptive attack techniques to reduce the defense's robust accuracy.
arXiv Detail & Related papers (2024-11-22T10:17:32Z)
- Deep Adversarial Defense Against Multilevel-Lp Attacks [5.604868766260297]
This paper introduces a computationally efficient multilevel $\ell_p$ defense, called the Efficient Robust Mode Connectivity (EMRC) method.
Similar to analytical continuation approaches used in continuous optimization, the method blends two $p$-specific adversarially optimal models.
We present experiments demonstrating that our approach performs better on various attacks as compared to AT-$\ell_\infty$, E-AT, and MSD.
arXiv Detail & Related papers (2024-07-12T13:30:00Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A5$ is a framework to craft a defensive perturbation that guarantees any attack against the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- GLOW: Global Layout Aware Attacks for Object Detection [27.46902978168904]
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS validation set.
arXiv Detail & Related papers (2023-02-27T22:01:34Z)
- LAFEAT: Piercing Through Adversarial Defenses with Latent Features [15.189068478164337]
We show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks.
We introduce LAFEAT, a unified $\ell_\infty$-norm white-box attack algorithm that harnesses latent features in its gradient descent steps.
arXiv Detail & Related papers (2021-04-19T13:22:20Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions (a rough sketch of this idea appears after this list).
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
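The patch-wise entries above describe amplifying the step size and reassigning the gradient that overflows the $\epsilon$-constraint to surrounding regions. The following is a minimal, hypothetical sketch of that idea rather than the papers' reference code: one amplified $\ell_\infty$ sign step, with the overflow spread to neighbouring pixels by a uniform depthwise convolution. The kernel size, amplification factor, step size, and function name are illustrative assumptions.

```python
# Hypothetical sketch of a patch-wise style step (not the papers' reference code).
import torch
import torch.nn.functional as F

def patchwise_step(model, x, x_orig, y, eps=16 / 255, alpha=2 / 255,
                   beta=10.0, kernel_size=3):
    """One step: amplified sign update, then spread the perturbation mass that
    overflows the eps-ball onto neighbouring pixels before clipping."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)

    # Amplified l_inf sign step, expressed as a perturbation of the clean image.
    delta = x.detach() + beta * alpha * grad.sign() - x_orig

    # Portion of each pixel's perturbation that exceeds the eps-constraint.
    overflow = (delta.abs() - eps).clamp(min=0) * delta.sign()

    # Redistribute the overflow to surrounding pixels with a uniform depthwise kernel.
    c = x.size(1)
    kernel = torch.full((c, 1, kernel_size, kernel_size), 1.0 / kernel_size ** 2,
                        device=x.device)
    spread = F.conv2d(overflow, kernel, padding=kernel_size // 2, groups=c)

    delta = (delta + spread).clamp(-eps, eps)
    return (x_orig + delta).clamp(0, 1).detach()
```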
This list is automatically generated from the titles and abstracts of the papers in this site.