PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack
- URL: http://arxiv.org/abs/2106.01538v1
- Date: Thu, 3 Jun 2021 01:45:48 GMT
- Title: PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack
- Authors: Alexander Matyasko, Lap-Pui Chau
- Abstract summary: State-of-the-art deep neural networks are sensitive to small input perturbations.
Many defence methods have been proposed that attempt to improve robustness to adversarial noise.
However, evaluating adversarial robustness has proven to be extremely challenging.
- Score: 92.94132883915876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art deep neural networks are sensitive to small input
perturbations. Since the discovery of this intriguing vulnerability, many
defence methods have been proposed that attempt to improve robustness to
adversarial noise. Fast and accurate attacks are required to compare various
defence methods. However, evaluating adversarial robustness has proven to be
extremely challenging. Existing norm minimisation adversarial attacks require
thousands of iterations (e.g. Carlini & Wagner attack), are limited to the
specific norms (e.g. Fast Adaptive Boundary), or produce sub-optimal results
(e.g. Brendel & Bethge attack). On the other hand, PGD attack, which is fast,
general and accurate, ignores the norm minimisation penalty and solves a
simpler perturbation-constrained problem. In this work, we introduce a fast,
general and accurate adversarial attack that optimises the original non-convex
constrained minimisation problem. We interpret optimising the Lagrangian of the
adversarial attack optimisation problem as a two-player game: the first player
minimises the Lagrangian wrt the adversarial noise; the second player maximises
the Lagrangian wrt the regularisation penalty. Our attack algorithm
simultaneously optimises primal and dual variables to find the minimal
adversarial perturbation. In addition, for non-smooth $l_p$-norm minimisation,
such as $l_{\infty}$-, $l_1$-, and $l_0$-norms, we introduce primal-dual
proximal gradient descent attack. We show in the experiments that our attack
outperforms current state-of-the-art $l_{\infty}$-, $l_2$-, $l_1$-, and
$l_0$-attacks on MNIST, CIFAR-10 and Restricted ImageNet datasets against
unregularised and adversarially trained models.
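To make the primal-dual idea concrete, the sketch below shows one primal-dual proximal gradient step for an $l_1$-norm attack in PyTorch. This is a minimal sketch, not the authors' reference implementation: the names (`pdpg_step`, `soft_threshold`, `lr_primal`, `lr_dual`), the step sizes, and the margin-based misclassification surrogate are illustrative assumptions. The primal player takes a gradient step on the smooth part of the Lagrangian followed by the proximal operator of the norm penalty; the dual player performs projected gradient ascent on the Lagrange multiplier.

```python
# Hypothetical sketch, assuming a PyTorch classifier `model`, inputs `x`
# in [0, 1], integer labels `y`, a perturbation `delta`, and a per-example
# Lagrange multiplier `lam`. Not the authors' reference implementation.
import torch


def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding). For l_inf- or
    # l_0-norm minimisation this would be swapped for the corresponding prox.
    return torch.sign(z) * torch.clamp(z.abs() - tau, min=0.0)


def pdpg_step(model, x, y, delta, lam, lr_primal=0.01, lr_dual=0.1):
    """One update of the two-player game: the primal player descends on the
    perturbation delta, the dual player ascends on the multiplier lam."""
    delta = delta.clone().detach().requires_grad_(True)
    logits = model(x + delta)

    # Misclassification constraint as a margin: positive while the true
    # class still wins, non-positive once the attack succeeds.
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    other_logit = logits.scatter(1, y.unsqueeze(1), float("-inf")).amax(dim=1)
    constraint = true_logit - other_logit

    # Smooth part of the Lagrangian (the non-smooth norm is handled by the prox).
    smooth = (lam * constraint).sum()
    grad = torch.autograd.grad(smooth, delta)[0]

    # Primal step: gradient descent on the smooth part, then the proximal
    # step for the l1-norm penalty, then projection onto the valid image box.
    delta = soft_threshold(delta.detach() - lr_primal * grad, lr_primal)
    delta = torch.clamp(x + delta, 0.0, 1.0) - x

    # Dual step: ascend on the constraint violation; multipliers stay non-negative.
    lam = torch.clamp(lam + lr_dual * constraint.detach(), min=0.0)
    return delta, lam
```

Iterating this step drives the multiplier up while the example is still correctly classified and lets the proximal step shrink the perturbation once the misclassification constraint is satisfied, which is how the minimal adversarial perturbation is approached.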
Related papers
- $σ$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples [14.17412770504598]
While most attacks use $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, sparse attacks are less explored.
We propose a novel $\ell_0$-norm attack called $\sigma$-zero.
It outperforms all competing adversarial attacks in terms of success rate, perturbation size, and efficiency.
arXiv Detail & Related papers (2024-02-02T20:08:11Z) - Adversarial Attacks on Gaussian Process Bandits [47.84198626686564]
We propose various adversarial attack methods with differing assumptions on the attacker's strength and prior information.
Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective.
We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget.
arXiv Detail & Related papers (2021-10-16T02:39:10Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few pixels.
Recent efforts combine this with an additional $\ell_\infty$ constraint on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation constraints in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700$\times$ faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z) - Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints [29.227720674726413]
We propose a fast minimum-norm (FMN) attack that works with different $\ell_p$-norm perturbation models.
Experiments show that FMN significantly outperforms existing attacks in terms of convergence speed and computation time.
arXiv Detail & Related papers (2021-02-25T12:56:26Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - Toward Adversarial Robustness via Semi-supervised Robust Training [93.36310070269643]
Adversarial examples have been shown to be a severe threat to deep neural networks (DNNs)
We propose a novel defense method, robust training (RT), by jointly minimizing two separate risks ($R_{stand}$ and $R_{rob}$)
arXiv Detail & Related papers (2020-03-16T02:14:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.