Stochastic sparse adversarial attacks
- URL: http://arxiv.org/abs/2011.12423v4
- Date: Sat, 19 Feb 2022 16:37:56 GMT
- Title: Stochastic sparse adversarial attacks
- Authors: Manon Césaire, Lucas Schott, Hatem Hajri, Sylvain Lamprier, and Patrick Gallinari
- Abstract summary: This paper introduces stochastic sparse adversarial attacks (SSAA) as simple, fast and purely noise-based targeted and untargeted attacks on neural network classifiers (NNC).
SSAA are devised by exploiting a small-time expansion idea widely used for Markov processes.
Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA in comparison with state-of-the-art methods.
- Score: 17.43654235818416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces stochastic sparse adversarial attacks (SSAA), standing
as simple, fast and purely noise-based targeted and untargeted attacks on
neural network classifiers (NNC). SSAA offer new examples of sparse (or $L_0$)
attacks for which only a few methods have been proposed previously. These attacks
are devised by exploiting a small-time expansion idea widely used for Markov
processes. Experiments on small and large datasets (CIFAR-10 and ImageNet)
illustrate several advantages of SSAA in comparison with state-of-the-art
methods. For instance, in the untargeted case, our method called Voting Folded
Gaussian Attack (VFGA) scales efficiently to ImageNet and achieves a
significantly lower $L_0$ score than SparseFool (up to $\frac{2}{5}$) while
being faster. Moreover, VFGA achieves better $L_0$ scores on ImageNet than
Sparse-RS when both attacks are fully successful on a large number of samples.
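
The abstract only describes the attack at a high level. The fragment below is a minimal, illustrative sketch of a purely noise-based sparse ($L_0$) attack in that spirit: repeatedly sample folded-Gaussian single-pixel perturbations, keep the candidate that most lowers the true-class probability, and report the number of perturbed pixels as the $L_0$ score. The function name, the sampling scheme, and the greedy loop are assumptions made for illustration only; the paper's actual VFGA (the small-time expansion and its voting scheme, plus the targeted variant) is not reproduced here.

```python
# Minimal sketch of a noise-based sparse (L_0) attack; illustrative only, not the
# authors' implementation. `predict` maps a batch of flat images to class probabilities.
import numpy as np

def sparse_noise_attack(x, true_label, predict, sigma=0.5, n_samples=32,
                        max_pixels=100, rng=None):
    """Greedily perturb one pixel per step with folded-Gaussian noise until the
    prediction changes (untargeted). Returns (adversarial image, L_0 score) or
    (None, None) if the pixel budget is exhausted."""
    rng = np.random.default_rng() if rng is None else rng
    adv, perturbed = x.copy(), set()
    for _ in range(max_pixels):
        if predict(adv[None])[0].argmax() != true_label:
            return adv, len(perturbed)                    # success: report sparsity
        # Candidate single-pixel perturbations with folded-Gaussian magnitudes.
        pixels = rng.choice(x.size, size=n_samples)
        deltas = np.abs(rng.normal(0.0, sigma, size=n_samples)) * rng.choice([-1.0, 1.0], size=n_samples)
        candidates = np.repeat(adv[None], n_samples, axis=0)
        idx = np.arange(n_samples)
        candidates[idx, pixels] = np.clip(candidates[idx, pixels] + deltas, 0.0, 1.0)
        # "Vote": keep the candidate that most reduces the true-class probability.
        best = int(predict(candidates)[:, true_label].argmin())
        adv = candidates[best]
        perturbed.add(int(pixels[best]))
    return None, None

# Toy usage with a random (hypothetical) linear softmax classifier, only to show the API.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 28 * 28, 10
    W = rng.normal(size=(k, d))

    def predict(batch):
        logits = batch @ W.T
        logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        return p / p.sum(axis=1, keepdims=True)

    x = rng.random(d)
    adv, l0 = sparse_noise_attack(x, int(predict(x[None])[0].argmax()), predict, rng=rng)
    print("success:", adv is not None, "L_0 =", l0)
```

Picking a single best pixel per step keeps the perturbation sparse by construction, which is what the $L_0$ comparisons against SparseFool and Sparse-RS quoted above measure.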
Related papers
- Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection [83.72430401516674]
GAKer is able to construct adversarial examples to any target class.
Our method achieves an approximately $14.13\%$ higher attack success rate for unknown classes.
arXiv Detail & Related papers (2024-07-17T03:24:09Z)
- GSE: Group-wise Sparse and Explainable Adversarial Attacks [20.068273625719943]
Sparse adversarial attacks fool deep neural networks (DNNs) through minimal pixel perturbations.
Recent efforts have replaced the $l_0$ norm with a sparsity regularizer, such as the nuclear group norm, to craft group-wise adversarial attacks.
We present a two-phase algorithm that generates group-wise attacks within semantically meaningful images.
arXiv Detail & Related papers (2023-11-29T08:26:18Z)
- SAIF: Sparse Adversarial and Imperceptible Attack Framework [7.025774823899217]
We propose a novel attack technique called Sparse Adversarial and Interpretable Attack Framework (SAIF).
Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers.
SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
arXiv Detail & Related papers (2022-12-14T20:28:50Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few pixels.
Recent efforts combine this sparsity constraint with an additional $l_\infty$ bound on perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700$\times$ faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions (a rough sketch of this update step appears after this list).
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- GreedyFool: Distortion-Aware Sparse Adversarial Attack [138.55076781355206]
Modern deep neural networks (DNNs) are vulnerable to adversarial samples.
Sparse adversarial samples can fool the target model by only perturbing a few pixels.
We propose a novel two-stage distortion-aware greedy method dubbed "GreedyFool".
arXiv Detail & Related papers (2020-10-26T17:59:07Z)
- Are L2 adversarial examples intrinsically different? [14.77179227968466]
We unravel the properties that can intrinsically differentiate adversarial examples and normal inputs through theoretical analysis.
We achieve a recovered classification accuracy of up to 99% on MNIST, 89% on CIFAR, and 87% on ImageNet subsets against $L_2$ attacks.
arXiv Detail & Related papers (2020-02-28T03:42:52Z)
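
The Patch-wise++ entry above describes an amplified step whose gradient overflow beyond the $\epsilon$-ball is handed to the surrounding pixels. The snippet below is a rough, self-contained sketch of one such update step under simplifying assumptions: a single-channel image, a plain 3x3 averaging of the overflow (the paper uses a project kernel), and illustrative parameter names and defaults.

```python
# Rough sketch of one patch-wise update step: amplify, clip to the eps-ball, and
# redistribute the clipped-off overflow to neighbouring pixels. Illustrative only.
import numpy as np

def patchwise_step(x, x_orig, grad, eps=16 / 255, alpha=2 / 255, beta=10.0):
    """x, x_orig, grad: H x W arrays; eps is the L_inf budget, alpha the base step
    size, beta the amplification factor."""
    x_new = x + beta * alpha * np.sign(grad)             # amplified FGSM-style step
    clipped = np.clip(x_new, x_orig - eps, x_orig + eps)
    overflow = x_new - clipped                           # part exceeding the eps-ball
    # Average the overflow over each pixel's 3x3 neighbourhood (np.roll wraps at the
    # border, which a real implementation would handle more carefully).
    spread = np.zeros_like(overflow)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            spread += np.roll(np.roll(overflow, dy, axis=0), dx, axis=1)
    x_new = np.clip(clipped + spread / 9.0, x_orig - eps, x_orig + eps)
    return np.clip(x_new, 0.0, 1.0)                      # keep a valid pixel range

# Toy call on random data, just to show the shapes involved.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
adv = patchwise_step(img.copy(), img, rng.normal(size=(32, 32)))
```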
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.