On the Matrix-Free Generation of Adversarial Perturbations for Black-Box
Attacks
- URL: http://arxiv.org/abs/2002.07317v1
- Date: Tue, 18 Feb 2020 00:50:26 GMT
- Title: On the Matrix-Free Generation of Adversarial Perturbations for Black-Box
Attacks
- Authors: Hisaichi Shibata, Shouhei Hanaoka, Yukihiro Nomura, Naoto Hayashi and
Osamu Abe
- Abstract summary: In this paper, we propose a practical method for generating such adversarial perturbations for black-box attacks.
Attackers generate the perturbation without invoking inner functions or accessing the inner states of a deep neural network.
- Score: 1.199955563466263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In general, adversarial perturbations superimposed on inputs are realistic
threats to a deep neural network (DNN). In this paper, we propose a practical
method for generating such adversarial perturbations for black-box attacks,
which demand access only to the input-output relationship. Thus, attackers can
generate the perturbation without invoking inner functions or accessing the
inner states of a DNN. Unlike earlier studies, the algorithm presented in this
study requires far fewer query trials. Moreover, to show the effectiveness of
the extracted adversarial perturbation, we experiment with a DNN for semantic
segmentation. The results show that the network is deceived more easily by the
generated perturbation than by uniformly distributed random noise of the same
magnitude.
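The black-box setting described above, where only the input-output relationship can be queried, can be illustrated with a short sketch. The code below is a generic finite-difference (SPSA-style) perturbation search, not the authors' matrix-free algorithm: it only calls a scalar loss oracle `query_loss` (an assumed helper, e.g. the fraction of segmentation pixels whose label flips relative to the clean prediction) and never touches gradients or internal states.

```python
import numpy as np

def black_box_perturbation(query_loss, x, eps, n_iters=50, delta=1e-2, seed=0):
    """Query-only perturbation search (SPSA-style finite differences).

    query_loss(x) -> float : black-box oracle the attacker wants to increase
                             (assumed helper, e.g. segmentation error rate).
    eps                    : L_inf budget for the returned perturbation.
    Only forward queries are used; no gradients or internal states of the DNN.
    """
    rng = np.random.default_rng(seed)
    step = eps / n_iters
    r = np.zeros_like(x)
    for _ in range(n_iters):
        d = rng.choice([-1.0, 1.0], size=x.shape)            # random +/-1 probe
        # Two queries give a finite-difference estimate of the loss gradient.
        g_hat = (query_loss(x + r + delta * d) -
                 query_loss(x + r - delta * d)) / (2.0 * delta) * d
        r = np.clip(r + step * np.sign(g_hat), -eps, eps)    # signed ascent step
    return r
```

Each iteration of this generic scheme costs two queries; the paper's contribution is precisely to reduce the number of such query trials compared with earlier black-box attacks.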
Related papers
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method (see the entropy-based sketch after this list).
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Exploring Adversarial Attacks on Neural Networks: An Explainable Approach [18.063187159491182]
We analyze the response characteristics of the VGG-16 model when the input images are mixed with adversarial noise and statistically similar Gaussian random noise.
Our work could provide valuable insights into developing more reliable Deep Neural Network (DNN) models.
arXiv Detail & Related papers (2023-03-08T07:59:44Z)
- Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization [31.516568778193157]
Adversarial training (AT) is often adopted to improve the robustness of deep neural networks (DNNs).
In this work, we propose an approach based on the Jacobian norm and Selective Input Gradient Regularization (J-SIGR).
Experiments demonstrate that the proposed J-SIGR confers improved robustness against transferred adversarial attacks, and we also show that the predictions of the neural network are easy to interpret.
arXiv Detail & Related papers (2022-07-09T01:06:41Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) across various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- On Procedural Adversarial Noise Attack And Defense [2.5388455804357952]
Adversarial examples can inveigle neural networks into making prediction errors with small perturbations on the input images.
In this paper, we propose two universal adversarial perturbation (UAP) generation methods based on procedural noise functions.
Without changing the semantic representations, the adversarial examples generated via our methods show superior attack performance.
arXiv Detail & Related papers (2021-08-10T02:47:01Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one (see the sketch after this list).
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost in clean accuracy.
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Network Moments: Extensions and Sparse-Smooth Attacks [59.24080620535988]
We derive exact analytic expressions for the first and second moments of a small piecewise-linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input (see the single-neuron sketch after this list).
We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates.
arXiv Detail & Related papers (2020-06-21T11:36:41Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
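For the GraN entry directly above, here is a minimal sketch of the gradient-norm signal it builds on, assuming a standard PyTorch classifier. It collapses the idea to a single input-gradient norm per sample, which is not the published detector's exact construction.

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Per-sample gradient-norm feature: the norm of the gradient (w.r.t. the
    input) of the loss taken at the model's own predicted label. Larger scores
    tend to indicate atypical inputs such as adversarial or misclassified
    examples. Simplified sketch, not the published GraN construction."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1).detach()
    loss = F.cross_entropy(logits, pred, reduction="sum")   # sum over the batch
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)                       # one score per input
```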
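For the uncertainty-based detection entry at the top of the list, the sketch below uses the mean per-pixel softmax entropy of a segmentation model as an uncertainty score and flags inputs whose score exceeds a threshold chosen on clean data; the paper's specific uncertainty estimates and detection pipeline may differ.

```python
import torch

def entropy_uncertainty_score(model, x):
    """Mean per-pixel softmax entropy of a segmentation model's output, used as
    a simple uncertainty score (a common choice; assumed here, not necessarily
    the exact measure used in the paper)."""
    model.eval()
    with torch.no_grad():
        p = torch.softmax(model(x), dim=1)                           # (N, C, H, W)
        entropy = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1)    # (N, H, W)
    return entropy.flatten(1).mean(dim=1)                             # one score per image

def is_adversarial(score, threshold):
    """Flag inputs whose uncertainty exceeds a threshold fit on clean data."""
    return score > threshold
```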
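For the anti-adversary entry, the sketch below captures the stated idea of perturbing the input in the opposite direction of an adversary, i.e. toward the model's own initial prediction, before the final classification. The step size, iteration count, and cross-entropy objective are illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn.functional as F

def anti_adversary_forward(model, x, step=2.0 / 255, n_steps=2):
    """Nudge the input toward the model's initial prediction (the opposite of
    what an attacker would do), then classify the nudged input."""
    with torch.no_grad():
        pseudo = model(x).argmax(dim=1)           # the model's initial guess
    x_aa = x.clone()
    for _ in range(n_steps):
        x_aa = x_aa.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_aa), pseudo)
        grad, = torch.autograd.grad(loss, x_aa)
        x_aa = x_aa - step * grad.sign()          # descend the pseudo-label loss
    return model(x_aa.detach())                    # logits for the final decision
```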
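For the Network Moments entry, the sketch below evaluates the classical closed-form first and second moments of ReLU(X) for a Gaussian X, the single-neuron building block behind such Affine-ReLU-Affine analyses; the paper's full network-level expressions are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def relu_gaussian_moments(mu, sigma):
    """Mean and variance of ReLU(X) for X ~ N(mu, sigma^2), using the standard
    closed forms for a rectified Gaussian (single-neuron case only)."""
    a = mu / sigma
    m1 = mu * norm.cdf(a) + sigma * norm.pdf(a)                        # E[ReLU(X)]
    m2 = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)   # E[ReLU(X)^2]
    return m1, m2 - m1**2
```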