Black-box adversarial attacks using Evolution Strategies
- URL: http://arxiv.org/abs/2104.15064v1
- Date: Fri, 30 Apr 2021 15:33:07 GMT
- Title: Black-box adversarial attacks using Evolution Strategies
- Authors: Hao Qiu, Leonardo Lucio Custode, Giovanni Iacca
- Abstract summary: We study the generation of black-box adversarial attacks for image classification tasks.
Our results show that the attacked neural networks can be, in most cases, easily fooled by all the algorithms under comparison.
Some black-box optimization algorithms may be better in "harder" setups, both in terms of attack success rate and efficiency.
- Score: 3.093890460224435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last decade, deep neural networks have proven to be very powerful in
computer vision tasks, starting a revolution in the computer vision and machine
learning fields. However, deep neural networks are usually not robust to
perturbations of the input data. In fact, several studies have shown that slightly
changing the content of an image can cause a dramatic decrease in the accuracy of
the attacked neural network. Many methods for generating adversarial samples rely
on gradients, which are usually not available to an attacker in real-world
scenarios. In contrast, another class of attacks, called black-box adversarial
attacks, has emerged; these do not use gradient information and are therefore more
suitable for real-world attack scenarios. In this work, we compare three
well-known evolution strategies on the generation of black-box adversarial
attacks for image classification tasks. While our results show that the
attacked neural networks can be, in most cases, easily fooled by all the
algorithms under comparison, they also show that some black-box optimization
algorithms may be better in "harder" setups, both in terms of attack success
rate and efficiency (i.e., number of queries).
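As a rough illustration of how an evolution strategy can drive such a black-box attack, the sketch below implements a simple (1+1)-ES that mutates an L-infinity-bounded perturbation and keeps a candidate only if it lowers the model's confidence in the true class. This is not the exact setup used in the paper (which compares three established evolution strategies); query_model is a hypothetical black-box oracle returning class probabilities, and all parameter values are placeholders.

```python
# Minimal sketch of a (1+1)-ES black-box attack loop (illustrative only;
# not the specific algorithms compared in the paper). Only the model's
# output probabilities are used, never its gradients.
import numpy as np

def es_black_box_attack(image, true_label, query_model,
                        eps=0.05, sigma=0.01, max_queries=10000, rng=None):
    """Search for a small perturbation that changes the predicted class.

    image       : np.ndarray in [0, 1], shape (H, W, C)
    true_label  : int, label assigned by the model to the clean image
    query_model : callable, image -> probability vector (black-box oracle)
    eps         : L-infinity bound on the perturbation
    sigma       : mutation step size of the (1+1)-ES
    max_queries : query budget (the efficiency metric discussed above)
    """
    rng = rng or np.random.default_rng(0)
    delta = np.zeros_like(image)                   # current perturbation (parent)
    best_fitness = query_model(image)[true_label]  # minimise true-class probability
    queries = 1

    while queries < max_queries:
        # Mutation: add Gaussian noise, then project back into the eps-ball.
        candidate = np.clip(delta + sigma * rng.standard_normal(image.shape), -eps, eps)
        adv = np.clip(image + candidate, 0.0, 1.0)
        probs = query_model(adv)
        queries += 1

        if probs.argmax() != true_label:           # success: the model is fooled
            return adv, queries
        if probs[true_label] < best_fitness:       # (1+1) selection: keep better child
            delta, best_fitness = candidate, probs[true_label]

    return None, queries                           # attack failed within the budget
```

In this sketch the query count doubles as the efficiency measure: an attack that succeeds after fewer calls to query_model is considered more efficient, which mirrors the comparison criterion described in the abstract.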
Related papers
- Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Lehman Go Indifferent [0.0]
We propose the first bit flip attack designed specifically for graph neural networks.
Our attack targets the learnable neighborhood aggregation functions in quantized message passing neural networks.
Our findings suggest that exploiting mathematical properties specific to certain graph neural network architectures can significantly increase their vulnerability to bit flip attacks.
arXiv Detail & Related papers (2023-11-02T12:59:32Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Art-Attack: Black-Box Adversarial Attack via Evolutionary Art [5.760976250387322]
Deep neural networks (DNNs) have achieved state-of-the-art performance in many tasks but have shown extreme vulnerabilities to attacks generated by adversarial examples.
This paper proposes a gradient-free attack by using a concept of evolutionary art to generate adversarial examples.
arXiv Detail & Related papers (2022-03-07T12:54:09Z) - Thundernna: a white box adversarial attack [0.0]
We develop a first-order method to attack the neural network.
Compared with other first-order attacks, our method has a much higher success rate.
arXiv Detail & Related papers (2021-11-24T07:06:21Z) - Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z) - Deep neural network loses attention to adversarial images [11.650381752104296]
Adversarial algorithms have been shown to be effective against neural networks for a variety of tasks.
We show that, in the case of Pixel Attack, perturbed pixels either draw the network's attention to themselves or divert its attention away from them.
We also show that both attacks affect the saliency map and activation maps differently.
arXiv Detail & Related papers (2021-06-10T11:06:17Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z) - Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.