HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep
Spiking Neural Networks by Training with Crafted Input Noise
- URL: http://arxiv.org/abs/2110.11417v1
- Date: Wed, 6 Oct 2021 16:48:48 GMT
- Authors: Souvik Kundu, Massoud Pedram, Peter A. Beerel
- Abstract summary: We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield classification accuracy improved by up to 13.7% on FGSM attack-generated images.
Our models also match or exceed, on attack-generated images, the classification performance of inherently robust SNNs trained on rate-coded inputs.
- Score: 13.904091056365765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-latency deep spiking neural networks (SNNs) have become a promising
alternative to conventional artificial neural networks (ANNs) because of their
potential for increased energy efficiency on event-driven neuromorphic
hardware. Neural networks, including SNNs, however, are subject to various
adversarial attacks and must be trained to remain resilient against such
attacks for many applications. Nevertheless, due to the prohibitively high
training costs associated with SNNs, analysis and optimization of deep SNNs
under various adversarial attacks have been largely overlooked. In this paper, we
first present a detailed analysis of the inherent robustness of low-latency
SNNs against popular gradient-based attacks, namely fast gradient sign method
(FGSM) and projected gradient descent (PGD). Motivated by this analysis, and
to harness the model's robustness against these attacks, we present an SNN
training algorithm that uses crafted input noise and incurs no additional training time.
To evaluate the merits of our algorithm, we conducted extensive experiments
with variants of VGG and ResNet on both CIFAR-10 and CIFAR-100 datasets.
Compared to standard-trained direct-input SNNs, our trained models yield
improved classification accuracy of up to 13.7% and 10.1% on FGSM and PGD
attack-generated images, respectively, with negligible loss in clean image
accuracy. Our models also compare favorably with inherently robust SNNs
trained on rate-coded inputs, delivering improved or similar classification
performance on attack-generated images while requiring up to 25x lower latency
and 4.6x lower computation energy.
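
For context, a minimal PyTorch sketch of the two attacks named above is given below. It is illustrative only, not the paper's code: model, loss_fn, and the eps/alpha/steps values are placeholder assumptions, and attacking an SNN in practice also requires a surrogate gradient to backpropagate through the non-differentiable spike function.

import torch

def fgsm_attack(model, loss_fn, x, y, eps=8/255):
    # One signed-gradient step of size eps away from the clean input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=7):
    # Iterated FGSM: small signed-gradient steps, each projected back
    # into the eps-ball around the clean input.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv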
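
The abstract does not detail the training algorithm itself. As a hedged guess at the general flavor, and not the HIRE-SNN algorithm, the sketch below follows the spirit of "free" adversarial training (Shafahi et al., 2019): the input gradient that the ordinary backward pass already produces is reused to craft noise for the next replay of the same minibatch, so no extra attack iterations are run. Keeping the total number of forward/backward passes fixed (e.g., proportionally fewer epochs) preserves the no-extra-training-time property.

import torch

def train_with_crafted_noise(model, loss_fn, optimizer, loader,
                             eps=8/255, replays=2):
    # Each minibatch is replayed a few times; the input gradient from
    # the previous replay crafts the noise for the next one, so the
    # perturbation comes "for free" with the ordinary backward pass.
    for x, y in loader:
        noise = torch.zeros_like(x)
        for _ in range(replays):
            x_noisy = (x + noise).clamp(0, 1).requires_grad_(True)
            loss = loss_fn(model(x_noisy), y)
            optimizer.zero_grad()
            loss.backward()  # gradients for weights AND the input
            optimizer.step()
            with torch.no_grad():
                noise = (noise + eps * x_noisy.grad.sign()).clamp(-eps, eps)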
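
The latency and energy comparison at the end of the abstract hinges on the difference between direct and rate-coded inputs. The sketch below is again an illustrative assumption rather than the paper's code: direct input presents the analog pixel values at every timestep and works with very few timesteps, whereas rate coding conveys intensity through stochastic spike counts and typically needs many more timesteps (and spikes) for comparable information.

import torch

def direct_encode(x, timesteps=5):
    # Direct input: the analog image is presented at every timestep.
    return x.unsqueeze(0).repeat(timesteps, *([1] * x.dim()))

def rate_encode(x, timesteps=100):
    # Rate coding: Bernoulli spike trains whose firing rates track
    # pixel intensity; fidelity improves with more timesteps.
    probs = x.clamp(0, 1).unsqueeze(0).repeat(timesteps, *([1] * x.dim()))
    return torch.bernoulli(probs)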
Related papers
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
Efficiently training SNNs is challenging because of their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of the robust training of SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Advancing Deep Residual Learning by Solving the Crux of Degradation in Spiking Neural Networks [21.26300397341615]
Residual learning and shortcuts have proven to be an important approach for training deep neural networks.
This paper proposes a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs.
arXiv Detail & Related papers (2021-12-09T06:29:00Z)
- Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression [12.37129078618206]
Deep spiking neural networks (SNNs) have emerged as a potential alternative to traditional deep learning frameworks.
Most SNN training frameworks yield high inference latency, which translates into increased spike activity and reduced energy efficiency.
This paper presents a non-iterative SNN training technique that achieves ultra-high compression with reduced spiking activity.
arXiv Detail & Related papers (2021-07-16T18:23:36Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Weight-Covariance Alignment for Adversarially Robust Neural Networks [15.11530043291188]
We propose a new stochastic neural network (SNN, here in the stochastic rather than spiking sense) that achieves state-of-the-art performance without relying on adversarial training.
While existing stochastic networks inject learned or hand-tuned isotropic noise, ours learns an anisotropic noise distribution to optimize a learning-theoretic bound on adversarial robustness.
arXiv Detail & Related papers (2020-10-17T19:28:35Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are potential candidates for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.