Security-Aware Approximate Spiking Neural Networks
- URL: http://arxiv.org/abs/2301.05264v1
- Date: Thu, 12 Jan 2023 19:23:15 GMT
- Title: Security-Aware Approximate Spiking Neural Networks
- Authors: Syed Tihaam Ahmad, Ayesha Siddique, Khaza Anuarul Hoque
- Abstract summary: We analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks.
We propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering, for securing AxSNNs.
Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and AQF significantly improve the robustness of AxSNNs.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known
for their susceptibility to adversarial attacks. Therefore, researchers in the
recent past have extensively studied the robustness and defense of DNNs and
SNNs under adversarial attacks. Compared to accurate SNNs (AccSNN), approximate
SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low
power applications. Unfortunately, the robustness of AxSNNs under adversarial
attacks is yet unexplored. In this paper, we first extensively analyze the
robustness of AxSNNs with different structural parameters and approximation
levels under two gradient-based and two neuromorphic attacks. Then, we propose
two novel defense methods, i.e., precision scaling and approximate
quantization-aware filtering (AQF), for securing AxSNNs. We evaluated the
effectiveness of these two defense methods using both static and neuromorphic
datasets. Our results demonstrate that AxSNNs are more prone to adversarial
attacks than AccSNNs, but precision scaling and AQF significantly improve the
robustness of AxSNNs. For instance, a PGD attack on AxSNN results in a 72%
accuracy loss compared to AccSNN without any attack, whereas the same attack on
the precision-scaled AxSNN leads to only a 17% accuracy loss in the static
MNIST dataset (4X robustness improvement). Similarly, a Sparse Attack on AxSNN
leads to a 77% accuracy loss when compared to AccSNN without any attack,
whereas the same attack on an AxSNN with AQF leads to only a 2% accuracy loss
in the neuromorphic DVS128 Gesture dataset (38X robustness improvement).
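The abstract names the two defenses but not their mechanics. As a rough illustration only, the sketch below assumes that precision scaling means uniformly quantizing weights to fewer bits and that approximate quantization-aware filtering (AQF) acts like a spatiotemporal filter that drops isolated DVS events; both functions are hypothetical stand-ins, not the paper's implementation.

```python
# Illustrative sketch only, not the paper's implementation. Assumes:
# precision scaling = uniform weight quantization to `bits` bits;
# AQF = dropping DVS events that have no spatiotemporal neighbor.
import numpy as np

def precision_scale(weights: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize weights onto a symmetric `bits`-bit grid."""
    levels = 2 ** (bits - 1) - 1
    scale = max(np.abs(weights).max(), 1e-12) / levels
    return np.round(weights / scale) * scale

def aqf_filter(events: np.ndarray, r_xy: int = 1, r_t: int = 2) -> np.ndarray:
    """Keep an event (t, x, y, p) only if another event fired within
    r_xy pixels and r_t time steps; adversarial spikes tend to be isolated."""
    kept = []
    for t, x, y, p in events:
        near = (
            (np.abs(events[:, 0] - t) <= r_t)
            & (np.abs(events[:, 1] - x) <= r_xy)
            & (np.abs(events[:, 2] - y) <= r_xy)
        )
        if near.sum() > 1:  # an event matches itself, so >1 means a real neighbor
            kept.append((t, x, y, p))
    return np.array(kept)
```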
Related papers
- RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature.
Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets.
This work theoretically demonstrates that SNN's inherent adversarial robustness stems from its Poisson coding.
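Poisson (rate) coding is a standard SNN input encoding; below is a minimal generic sketch of it (not code from the RSC-SNN paper): each pixel intensity becomes a per-time-step spike probability.

```python
# Minimal Poisson rate coding of an image into a binary spike train.
import numpy as np

def poisson_encode(image: np.ndarray, time_steps: int = 25) -> np.ndarray:
    """Return a (T, H, W) spike train for an image with values in [0, 1]."""
    rng = np.random.default_rng(seed=0)
    return (rng.random((time_steps, *image.shape)) < image).astype(np.uint8)
```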
arXiv Detail & Related papers (2024-07-29T15:26:15Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
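QAA's exact objective is defined in that paper; the toy sketch below only illustrates the general idea of a multiple-bitwidth training objective, using a straight-through estimator for the quantizer and a single linear layer as a stand-in substitute model.

```python
# Toy multiple-bitwidth training sketch (not the QAA implementation).
import torch
import torch.nn.functional as F

def quantize_ste(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform quantization; the gradient passes straight through."""
    levels = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / levels
    q = torch.round(w / scale) * scale
    return w + (q - w).detach()  # forward: q, backward: identity

w = torch.randn(10, 784, requires_grad=True)   # stand-in substitute model
opt = torch.optim.SGD([w], lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
for _ in range(100):
    # Average the task loss over several bitwidths so the learned
    # weights behave consistently across quantization levels.
    loss = sum(F.cross_entropy(x @ quantize_ste(w, b).t(), y) for b in (2, 4, 8)) / 3
    opt.zero_grad(); loss.backward(); opt.step()
```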
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
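DSR itself differentiates through a spike-representation mapping (see that paper for the derivation); as a point of comparison, the sketch below shows the more common surrogate-gradient workaround for the same non-differentiability: a hard threshold in the forward pass, a smooth gradient in the backward pass.

```python
# Standard surrogate-gradient trick (not the DSR method itself).
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; fast-sigmoid derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1 + v.abs()) ** 2  # smooth surrogate for dS/dv

spike = SurrogateSpike.apply  # usable inside a leaky integrate-and-fire step
```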
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Toward Robust Spiking Neural Network Against Adversarial Perturbation
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training for SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z)
- Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
We present an adversarial analysis of different approximate DNN accelerators (AxDNNs) using state-of-the-art approximate multipliers.
Our results demonstrate that adversarial attacks on AxDNNs can cause up to 53% accuracy loss, whereas the same attacks lead to almost no accuracy loss in the corresponding accurate DNNs.
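The specific multipliers evaluated are listed in that paper; as a generic stand-in, the sketch below truncates the low-order bits of each operand before multiplying, the kind of accuracy-for-energy trade such approximate multipliers make.

```python
# Generic truncation-based approximate multiplier (illustration only).
def approx_mul(a: int, b: int, truncated_bits: int = 4) -> int:
    """Zero the low-order bits of both operands, then multiply exactly."""
    mask = ~((1 << truncated_bits) - 1)
    return (a & mask) * (b & mask)

print(approx_mul(100, 57), 100 * 57)  # 4608 approximate vs. 5700 exact
```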
arXiv Detail & Related papers (2021-12-02T19:01:36Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
Bayesian Neural Networks (BNNs) are adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
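A common way to exploit that randomness at inference time (a general variational-Bayes practice, not necessarily this paper's exact procedure) is Monte Carlo averaging over stochastic forward passes:

```python
# Monte Carlo prediction: average softmax outputs over stochastic passes.
import torch

@torch.no_grad()
def mc_predict(model: torch.nn.Module, x: torch.Tensor, samples: int = 20):
    model.train()  # keep stochastic layers (dropout / sampled weights) active
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(samples)])
    return probs.mean(dim=0)
```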
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
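HIRE-SNN's specific noise-crafting schedule is given in that paper; the sketch below shows only the general pattern it builds on: perturbing training inputs with gradient-sign noise computed on the fly inside the normal training step.

```python
# Generic gradient-sign input-noise training step (not HIRE-SNN itself).
import torch
import torch.nn.functional as F

def noisy_training_step(model, x, y, opt, eps=8 / 255):
    x_req = x.detach().clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_noisy = (x + eps * grad.sign()).clamp(0, 1).detach()  # crafted input
    opt.zero_grad()
    F.cross_entropy(model(x_noisy), y).backward()  # train on the noisy input
    opt.step()
```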
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
- Neural Architecture Dilation for Adversarial Robustness
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under a minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
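For reference, here is a minimal sketch of PGD, the iterative gradient-based attack used throughout these comparisons (inputs assumed in [0, 1]):

```python
# Minimal L-infinity PGD attack.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=0.1, alpha=0.01, steps=10):
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient-sign step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project to the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid image
    return x_adv.detach()
```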
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.