Is Approximation Universally Defensive Against Adversarial Attacks in
Deep Neural Networks?
- URL: http://arxiv.org/abs/2112.01555v1
- Date: Thu, 2 Dec 2021 19:01:36 GMT
- Title: Is Approximation Universally Defensive Against Adversarial Attacks in
Deep Neural Networks?
- Authors: Ayesha Siddique, Khaza Anuarul Hoque
- Abstract summary: We present an adversarial analysis of different approximate DNN accelerators (AxDNNs) using the state-of-the-art approximate multipliers.
Our results demonstrate that adversarial attacks on AxDNNs can cause a 53% accuracy loss, whereas the same attack may lead to almost no accuracy loss in the accurate DNN.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Approximate computing is known for its effectiveness in improving
the energy efficiency of deep neural network (DNN) accelerators at the cost of
a slight accuracy loss. Very recently, the inexact nature of approximate
components, such as approximate multipliers, has also been reported to be
successful in defending against adversarial attacks on DNN models. Since the
approximation errors traverse through the DNN layers as masked or unmasked,
this raises a key research question: can approximate computing always offer a
defense against adversarial attacks in DNNs, i.e., is it universally
defensive? Towards this, we present an extensive adversarial robustness
analysis of different approximate DNN accelerators (AxDNNs) using
state-of-the-art approximate multipliers. In particular, we evaluate the
impact of ten adversarial attacks on different AxDNNs using the MNIST and
CIFAR-10 datasets. Our results demonstrate that adversarial attacks on AxDNNs
can cause a 53% accuracy loss, whereas the same attack may lead to almost no
accuracy loss (as low as 0.06%) in the accurate DNN. Thus, approximate
computing cannot be referred to as a universal defense strategy against
adversarial attacks.
Related papers
- Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers [1.3820778058499328]
Deep Neural Networks (DNNs) have advanced in many real-world applications, such as healthcare and autonomous driving.
Their high computational complexity and vulnerability to adversarial attacks are ongoing challenges.
By uniformly replacing accurate multipliers with state-of-the-art approximate ones in DNN layer models, we explore DNN robustness against various adversarial attacks in a feasible time.
arXiv Detail & Related papers (2024-04-17T18:03:12Z) - Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM [5.592360872268223]
Defense strategies usually train deep neural networks (DNNs) for a specific adversarial attack method and can achieve good robustness against that type of attack.
However, when subjected to evaluations involving unfamiliar attack modalities, empirical evidence reveals a pronounced deterioration in the robustness of DNNs.
Most defense methods often sacrifice the accuracy of clean examples in order to improve the adversarial robustness of DNNs.
arXiv Detail & Related papers (2024-03-18T03:54:01Z) - Not So Robust After All: Evaluating the Robustness of Deep Neural
Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Security-Aware Approximate Spiking Neural Networks [0.0]
We analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks.
We propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs.
Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and AQF significantly improve the robustness of AxSNNs.
arXiv Detail & Related papers (2023-01-12T19:23:15Z) - A Mask-Based Adversarial Defense Scheme [3.759725391906588]
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs).
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
arXiv Detail & Related papers (2022-04-21T12:55:27Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) with the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - Perceptual Adversarial Robustness: Defense Against Unseen Threat Models [58.47179090632039]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception.
Under the neural perceptual threat model (NPTM), we develop novel perceptual adversarial attacks and defenses.
Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks.
arXiv Detail & Related papers (2020-06-22T22:40:46Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z) - Defending against Backdoor Attack on Deep Neural Networks [98.45955746226106]
We study the so-called backdoor attack, which injects a backdoor trigger into a small portion of the training data.
Experiments show that our method can effectively decrease the attack success rate while maintaining high classification accuracy on clean images.
arXiv Detail & Related papers (2020-02-26T02:03:00Z)
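As a toy illustration of the threat the last entry defends against (not the cited paper's method), the sketch below poisons a small fraction of a training set with a corner-patch trigger and relabels it to an attacker-chosen class; rate, patch, and target_class are hypothetical parameters chosen only for the example.

    # Toy sketch of a backdoor (data-poisoning) attack, for illustration only:
    # stamp a small bright patch on a fraction of training images and relabel
    # them to a target class. Defenses such as the one cited above aim to keep
    # clean accuracy high while suppressing the trigger's effect.
    import numpy as np

    def poison(images, labels, target_class=0, rate=0.05, patch=3, value=1.0, seed=0):
        """images: (N, H, W) array in [0, 1]; labels: (N,) integer array."""
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        images[idx, -patch:, -patch:] = value   # trigger: bottom-right corner patch
        labels[idx] = target_class              # attacker-chosen label
        return images, labels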