Neurogenesis Dynamics-inspired Spiking Neural Network Training
Acceleration
- URL: http://arxiv.org/abs/2304.12214v1
- Date: Mon, 24 Apr 2023 15:54:22 GMT
- Title: Neurogenesis Dynamics-inspired Spiking Neural Network Training
Acceleration
- Authors: Shaoyi Huang, Haowen Fang, Kaleel Mahmood, Bowen Lei, Nuo Xu, Bin Lei,
Yue Sun, Dongkuan Xu, Wujie Wen, Caiwen Ding
- Abstract summary: Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence.
We propose a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework, NDSNN.
Our framework is computationally efficient and trains a model from scratch with dynamic sparsity without sacrificing model fidelity.
- Score: 25.37391055865312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biologically inspired Spiking Neural Networks (SNNs) have attracted
significant attention for their ability to provide extremely energy-efficient
machine intelligence through event-driven operation and sparse activities. As
artificial intelligence (AI) becomes ever more democratized, there is an
increasing need to execute SNN models on edge devices. Existing works adopt
weight pruning to reduce SNN model size and accelerate inference. However,
these methods mainly focus on how to obtain a sparse model for efficient
inference, rather than training efficiency. To overcome these drawbacks, in
this paper, we propose a Neurogenesis Dynamics-inspired Spiking Neural Network
training acceleration framework, NDSNN. Our framework is computationally
efficient and trains a model from scratch with dynamic sparsity without
sacrificing model fidelity. Specifically, we design a new drop-and-grow
strategy with a decreasing number of non-zero weights to maintain extremely
high sparsity and high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19
on CIFAR-10, CIFAR-100 and TinyImageNet. Experimental results show that NDSNN
achieves up to a 20.52% improvement in accuracy on TinyImageNet using
ResNet-19 (with a sparsity of 99%) compared to other SOTA methods (e.g.,
Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training
cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35%
of the LTH training cost on VGG-16 on CIFAR-10.
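The abstract only states that the drop-and-grow strategy shrinks the number of non-zero weights over training; the sketch below illustrates one plausible form of such an update. The concrete choices (magnitude-based drop, gradient-magnitude-based grow, cosine-decayed non-zero budget, and the helper names `nonzero_budget` and `drop_and_grow`) are assumptions for illustration, not the authors' exact schedule.
```python
# Hedged sketch of a drop-and-grow mask update with a shrinking non-zero
# budget, in the spirit of NDSNN's dynamic-sparsity training. The drop/grow
# criteria and the budget schedule are illustrative assumptions.
import math
import torch


def nonzero_budget(step: int, total_steps: int, n_params: int,
                   s_init: float, s_final: float) -> int:
    """Cosine-decay the non-zero budget from (1 - s_init) * n_params down to
    (1 - s_final) * n_params (assumed schedule, not from the paper)."""
    d_init, d_final = 1.0 - s_init, 1.0 - s_final
    frac = min(step / max(total_steps, 1), 1.0)
    density = d_final + 0.5 * (d_init - d_final) * (1.0 + math.cos(math.pi * frac))
    return int(density * n_params)


def drop_and_grow(weight: torch.Tensor, grad: torch.Tensor,
                  mask: torch.Tensor, budget: int,
                  drop_frac: float = 0.3) -> torch.Tensor:
    """One mask update: drop the smallest-magnitude active weights, then grow
    back inactive connections with the largest gradient magnitudes, so that
    exactly `budget` weights remain non-zero."""
    flat_w, flat_g = weight.detach().reshape(-1), grad.reshape(-1)
    new_mask = mask.reshape(-1).clone()

    # Drop: keep only the largest-magnitude active weights, also shedding any
    # excess over the (shrinking) budget.
    active = new_mask.nonzero(as_tuple=True)[0]
    n_keep = min(budget, int(len(active) * (1.0 - drop_frac)))
    keep = active[torch.topk(flat_w[active].abs(), n_keep).indices]
    new_mask.zero_()
    new_mask[keep] = 1

    # Grow: re-activate inactive weights with the largest gradients; they start
    # from zero because pruned weights are zeroed by the mask below.
    n_grow = budget - n_keep
    inactive = (new_mask == 0).nonzero(as_tuple=True)[0]
    if n_grow > 0 and len(inactive) > 0:
        grow = inactive[torch.topk(flat_g[inactive].abs(),
                                   min(n_grow, len(inactive))).indices]
        new_mask[grow] = 1

    new_mask = new_mask.reshape(mask.shape)
    with torch.no_grad():
        weight.mul_(new_mask.to(weight.dtype))  # enforce the sparse pattern
    return new_mask
```
In such a scheme the mask update would run only every few hundred iterations per layer, with the mask also applied to the gradients in between, so that only the active connections are trained and the overall cost scales with the (decreasing) non-zero budget rather than the dense model size.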
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a novel state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on MS dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - AT-SNN: Adaptive Tokens for Vision Transformer on Spiking Neural Network [4.525951256256855]
AT-SNN is designed to dynamically adjust the number of tokens processed during inference in SNN-based ViTs with direct training.
We show the effectiveness of AT-SNN in achieving high energy efficiency and accuracy compared to state-of-the-art approaches on the image classification tasks.
arXiv Detail & Related papers (2024-08-22T11:06:18Z) - Training a General Spiking Neural Network with Improved Efficiency and
Minimum Latency [4.503744528661997]
Spiking Neural Networks (SNNs) operate in an event-driven manner and employ binary spike representation.
This paper proposes a general training framework that enhances feature learning and activation efficiency within a limited time step.
arXiv Detail & Related papers (2024-01-05T09:54:44Z) - SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking
Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way.
Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking
Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z) - HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep
Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard trained direct input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
arXiv Detail & Related papers (2021-10-06T16:48:48Z) - "BNN - BN = ?": Training Binary Neural Networks without Batch
Normalization [92.23297927690149]
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art binary neural networks (BNNs).
We extend their framework to training BNNs, and for the first time demonstrate that BNs can be completely removed from BNN training and inference regimes.
arXiv Detail & Related papers (2021-04-16T16:46:57Z) - Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.