Attacking the Spike: On the Transferability and Security of Spiking
Neural Networks to Adversarial Examples
- URL: http://arxiv.org/abs/2209.03358v3
- Date: Fri, 13 Oct 2023 21:19:53 GMT
- Title: Attacking the Spike: On the Transferability and Security of Spiking
Neural Networks to Adversarial Examples
- Authors: Nuo Xu, Kaleel Mahmood, Haowen Fang, Ethan Rathbun, Caiwen Ding, Wujie
Wen
- Abstract summary: Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance.
Unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped.
We show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique.
- Score: 19.227133993690504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) have attracted much attention for their high
energy efficiency and for recent advances in their classification performance.
However, unlike traditional deep learning approaches, the analysis and study of
the robustness of SNNs to adversarial examples remain relatively
underdeveloped. In this work, we focus on advancing the adversarial attack side
of SNNs and make three major contributions. First, we show that successful
white-box adversarial attacks on SNNs are highly dependent on the underlying
surrogate gradient technique, even in the case of adversarially trained SNNs.
Second, using the best surrogate gradient technique, we analyze the
transferability of adversarial attacks on SNNs and other state-of-the-art
architectures like Vision Transformers (ViTs) and Big Transfer Convolutional
Neural Networks (CNNs). We demonstrate that the adversarial examples created by
non-SNN architectures are not often misclassified by SNNs. Third, due to the
lack of a ubiquitous white-box attack that is effective across both the SNN
and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention
Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples
capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as
much as $91.1\%$ more effective on SNN/ViT model ensembles and provides a
$3\times$ boost in attack effectiveness on adversarially trained SNN ensembles
compared to conventional white-box attacks like Auto-PGD. Our experiments and
analyses are broad and rigorous, covering three datasets (CIFAR-10, CIFAR-100
and ImageNet), five different white-box attacks and nineteen classifier models
(seven for each CIFAR dataset and five models for ImageNet).
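Because the first contribution hinges on how the choice of surrogate gradient shapes white-box attack gradients, the following minimal PyTorch sketch may help make the mechanism concrete. It is a generic illustration under assumed names and settings (the fast-sigmoid surrogate, the toy `TinySNNClassifier`, the `fgsm_step` helper, threshold 1.0, slope 10.0), not the paper's models, attacks, or Auto-SAGA.

```python
import torch
import torch.nn as nn


class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane, slope: float = 10.0):
        ctx.save_for_backward(membrane)
        ctx.slope = slope
        return (membrane > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Surrogate derivative: d(spike)/d(u) approximated by 1 / (1 + slope * |u|)^2
        surrogate = 1.0 / (1.0 + ctx.slope * membrane.abs()) ** 2
        return grad_output * surrogate, None


class TinySNNClassifier(nn.Module):
    """Toy rate-coded SNN: one hidden spiking layer unrolled over a few timesteps."""

    def __init__(self, in_dim=784, hidden=128, classes=10, timesteps=4, slope=10.0):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.timesteps = timesteps
        self.slope = slope

    def forward(self, x):
        x = x.flatten(1)
        mem = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = 0.0
        for _ in range(self.timesteps):
            mem = mem + self.fc1(x)                              # integrate input current
            spk = FastSigmoidSpike.apply(mem - 1.0, self.slope)  # fire at threshold 1.0
            mem = mem * (1.0 - spk)                              # hard reset fired neurons
            logits = logits + self.fc2(spk)
        return logits / self.timesteps                           # rate-averaged logits


def fgsm_step(model, x, y, eps=8 / 255):
    """Single FGSM step; the perturbation direction is shaped entirely by the surrogate."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Re-running `fgsm_step` after changing `slope`, or swapping the fast-sigmoid surrogate for an arctan or piecewise-linear one, leaves the forward pass (and clean accuracy) unchanged but alters `x_adv.grad`, and therefore the crafted perturbation; this is the sensitivity to the surrogate gradient that the abstract describes.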
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Security-Aware Approximate Spiking Neural Networks [0.0]
We analyze the robustness of approximate SNNs (AxSNNs) with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks.
We propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs.
Our results demonstrate that AxSNNs are more prone to adversarial attacks than accurate SNNs (AccSNNs), but precision scaling and AQF significantly improve the robustness of AxSNNs.
arXiv Detail & Related papers (2023-01-12T19:23:15Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently due to the non-differentiability of their spiking activations.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance with low latency.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training for SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against adversarial examples is adversarial training, a technique in which a DNN is trained to be robust to such attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights.
Our strategy achieves state-of-the-art adversarial robustness against diverse attacks with minimal compromise in natural-image classification accuracy.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are potential candidates for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.