Training Full Spike Neural Networks via Auxiliary Accumulation Pathway
- URL: http://arxiv.org/abs/2301.11929v1
- Date: Fri, 27 Jan 2023 02:33:40 GMT
- Title: Training Full Spike Neural Networks via Auxiliary Accumulation Pathway
- Authors: Guangyao Chen, Peixi Peng, Guoqi Li, Yonghong Tian
- Abstract summary: Spiking Neural Networks (SNNs) are attracting increasing attention.
The binary spike propagation of the Full-Spike Neural Networks (FSNN) with limited time steps is prone to significant information loss.
This paper proposes a novel Dual-Stream Training (DST) method which adds a detachable Auxiliary Accumulation Pathway (AAP) to the full spiking residual networks.
- Score: 36.971293004654655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because binary spike signals allow the traditional high-power
multiply-accumulate (MAC) operations to be replaced by low-power accumulate
(AC) operations, brain-inspired Spiking Neural Networks (SNNs) are attracting
increasing attention. However, the binary spike propagation of Full-Spike
Neural Networks (FSNNs) with limited time steps is prone to significant
information loss. To improve performance, several state-of-the-art SNN models
trained from scratch inevitably introduce many non-spike operations. These
non-spike operations add computational cost and may not be deployable on some
neuromorphic hardware where only spike operations are allowed. To train a
large-scale FSNN with high performance, this paper proposes a novel Dual-Stream
Training (DST) method that adds a detachable Auxiliary Accumulation Pathway
(AAP) to the full spiking residual network. The accumulation in the AAP
compensates for the information loss during the forward and backward passes of
full spike propagation and facilitates the training of the FSNN. In the test
phase, the AAP can be removed, leaving only the FSNN; this keeps energy
consumption low and makes the model easy to deploy. Moreover, in cases where
non-spike operations are available, the AAP can also be retained at inference
to improve feature discrimination at the cost of a small amount of non-spike
computation. Extensive experiments on the ImageNet, DVS Gesture, and
CIFAR10-DVS datasets demonstrate the effectiveness of DST.
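As a rough illustration of the dual-stream idea, the following PyTorch sketch (hypothetical names and block design, not the paper's exact architecture) shows a spiking residual block that carries a detachable accumulation stream alongside the binary spike stream:

```python
import torch
import torch.nn as nn


def spike(v: torch.Tensor, thresh: float = 1.0) -> torch.Tensor:
    """Straight-through spike: hard threshold forward, identity backward."""
    s_hard = (v >= thresh).float()
    return s_hard.detach() + v - v.detach()


class DualStreamBlock(nn.Module):
    """Hypothetical residual block with a detachable accumulation stream."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, spikes, acc=None):
        out = self.bn(self.conv(spikes))
        new_spikes = spike(out + spikes)   # full-spike residual stream
        if acc is not None:                # auxiliary accumulation (training only)
            acc = acc + out                # plain accumulation of real-valued features
        return new_spikes, acc
```

During training, a head on the accumulated features would supply the auxiliary signal described above; at test time the block is called with acc=None and behaves as a pure FSNN.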
Related papers
- Learning A Spiking Neural Network for Efficient Image Deraining [20.270365030042623]
We present an Efficient Spiking Deraining Network, called ESDNet.
Our work is motivated by the observation that rain pixels produce more pronounced spike-signal intensities in SNNs.
We introduce a gradient proxy strategy to train the model directly, overcoming the difficulty of backpropagating through discrete spikes (a generic sketch follows this entry).
arXiv Detail & Related papers (2024-05-10T07:19:58Z)
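In spirit, a gradient proxy replaces the zero-almost-everywhere derivative of the Heaviside firing function with a smooth surrogate during backprop. A generic PyTorch sketch (not ESDNet's exact proxy):

```python
import torch


class SurrogateSpike(torch.autograd.Function):
    """Heaviside firing forward; a smooth proxy derivative backward."""

    @staticmethod
    def forward(ctx, membrane, threshold=1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Derivative of a fast sigmoid centered at the threshold.
        dist = (membrane - ctx.threshold).abs()
        proxy = 1.0 / (1.0 + 10.0 * dist) ** 2
        return grad_output * proxy, None


v = torch.randn(4, 8, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()  # gradients flow via the proxy, not the Heaviside step
```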
- SynA-ResNet: Spike-driven ResNet Achieved through OR Residual Connection [10.702093960098104]
Spiking Neural Networks (SNNs) have garnered substantial attention in brain-like computing for their biological fidelity and the capacity to execute energy-efficient spike-driven operations.
We propose a novel training paradigm that first accumulates a large amount of redundant information through an OR Residual Connection (ORRC), then filters out the redundancy with the Synergistic Attention (SynA) module, which promotes feature extraction in the backbone while suppressing noise and useless features in the shortcuts (the OR merge is sketched below).
arXiv Detail & Related papers (2023-11-11T13:36:27Z)
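The OR Residual Connection can be pictured as merging two binary spike maps with a logical OR instead of addition, so the merged signal stays spike-valued. A minimal sketch, assuming both inputs are {0,1} tensors:

```python
import torch


def or_residual(main: torch.Tensor, shortcut: torch.Tensor) -> torch.Tensor:
    """Merge two binary spike maps with a logical OR.
    clamp(a + b, max=1) equals OR for {0,1} inputs while keeping a float
    dtype, so surrogate gradients can still pass where the sum is <= 1."""
    return torch.clamp(main + shortcut, max=1.0)
```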
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves performance comparable to its non-spiking counterpart (a multi-threshold neuron is sketched below).
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
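The multi-threshold neuron in the title can be sketched as comparing the membrane potential against several thresholds at once, so each step emits a graded value rather than a single bit. This is a guess at the general mechanism, not the paper's exact formulation:

```python
import torch


def multi_threshold_fire(v: torch.Tensor, thresholds=(0.5, 1.0, 2.0)) -> torch.Tensor:
    """Fire one spike per crossed threshold: the per-step output is an
    integer in {0, ..., len(thresholds)}, carrying more information than
    a single binary spike."""
    return sum((v >= t).float() for t in thresholds)
```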
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed equilibrium-based training method to purely spike-based computation (a generic sketch of the equilibrium idea follows this entry).
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
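Implicit differentiation trains a feedback network through its equilibrium (fixed-point) state instead of unrolling the dynamics. The sketch below is a generic deep-equilibrium-style approximation with a one-step gradient, not SPIDE's spike-based derivation:

```python
import torch
import torch.nn as nn


class EquilibriumLayer(nn.Module):
    """Iterate a = f(a, x) to a fixed point, then backprop through only
    the final step -- a cheap one-step proxy for the implicit gradient."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin_a = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def f(self, a, x):
        return torch.tanh(self.lin_a(a) + self.lin_x(x))

    def forward(self, x, n_iter: int = 30):
        a = torch.zeros_like(x)
        with torch.no_grad():          # solve for the equilibrium untracked
            for _ in range(n_iter):
                a = self.f(a, x)
        return self.f(a, x)            # re-attach one step for gradients
```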
- Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks [19.490903216456758]
Spiking neural networks (SNNs) are neural networks with asynchronous discrete and sparse characteristics.
We propose a multi-level firing (MLF) method based on the existing spiking-suppressed residual network (spiking DS-ResNet).
arXiv Detail & Related papers (2022-10-12T16:39:46Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning without storing the unrolled computational graph (a loose sketch of the online-update pattern follows this entry).
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
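The forward-in-time pattern can be approximated as: run one timestep at a time, detach the state across steps so no graph accumulates over time, and update the weights online. A loose toy sketch with an assumed leaky-integrator readout, not the paper's exact update rule:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(100, 10)
opt = torch.optim.SGD(layer.parameters(), lr=1e-3)

x = torch.rand(16, 100)                # constant input repeated over time
target = torch.randint(0, 10, (16,))
v = torch.zeros(16, 10)                # membrane / readout state
decay = 0.9

for t in range(8):                     # T = 8 timesteps
    v = decay * v.detach() + layer(x)  # detach(): graph spans one step only
    loss = F.cross_entropy(v, target)
    opt.zero_grad()
    loss.backward()                    # gradient for the current step
    opt.step()                         # online weight update, per timestep
```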
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Adaptive-SpikeNet: Event-based Optical Flow Estimation using Spiking
Neural Networks with Learnable Neuronal Dynamics [6.309365332210523]
Spiking Neural Networks (SNNs) with their neuro-inspired event-driven processing can efficiently handle asynchronous data.
We propose an adaptive fully-spiking framework with learnable neuronal dynamics to alleviate the spike vanishing problem.
Our experiments show an average reduction of 13% in average endpoint error (AEE) compared to state-of-the-art ANNs (a learnable-leak neuron is sketched below).
arXiv Detail & Related papers (2022-09-21T21:17:56Z)
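Learnable neuronal dynamics usually means treating quantities like the membrane leak as trainable parameters. A minimal sketch of a LIF neuron with a learnable per-feature leak (hypothetical parameterization):

```python
import torch
import torch.nn as nn


class LearnableLIF(nn.Module):
    """LIF neuron whose per-feature leak is learned by gradient descent
    (pair with a surrogate spike function for end-to-end training)."""

    def __init__(self, features: int, threshold: float = 1.0):
        super().__init__()
        # Unconstrained parameter squashed into (0, 1): a valid decay factor.
        self.leak_raw = nn.Parameter(torch.zeros(features))
        self.threshold = threshold

    def forward(self, current, v):
        leak = torch.sigmoid(self.leak_raw)
        v = leak * v + current
        spikes = (v >= self.threshold).float()
        v = v - spikes * self.threshold    # soft reset keeps residual charge
        return spikes, v
```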
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance (a rough rate-representation sketch follows this entry).
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
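DSR's central move is to define gradients on a spike-train representation (e.g., the firing rate) rather than on individual spikes. A rough sketch, using a clamped-linear proxy for the integrate-and-fire rate response (an assumption, not the paper's exact mapping):

```python
import torch

T, THRESHOLD = 8, 1.0


def rate_forward(current: torch.Tensor) -> torch.Tensor:
    """Run an integrate-and-fire neuron for T steps and return its firing
    rate, with a straight-through gradient defined by a smooth proxy."""
    with torch.no_grad():              # the spike loop itself is not differentiated
        v = torch.zeros_like(current)
        total = torch.zeros_like(current)
        for _ in range(T):
            v = v + current
            s = (v >= THRESHOLD).float()
            v = v - s * THRESHOLD
            total = total + s
    rate = total / T
    # Clamped-linear approximation of the IF neuron's rate response.
    proxy = torch.clamp(current / THRESHOLD, 0.0, 1.0)
    return rate + proxy - proxy.detach()
```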
- DIET-SNN: Direct Input Encoding With Leakage and Threshold Optimization in Deep Spiking Neural Networks [8.746046482977434]
DIET-SNN is a low-latency deep spiking network that is trained with gradient descent to optimize the membrane leak and the firing threshold.
We evaluate DIET-SNN on image classification tasks from CIFAR and ImageNet datasets on VGG and ResNet architectures.
We achieve top-1 accuracy of 69% with 5 timesteps (inference latency) on the ImageNet dataset with 12x less compute energy than an equivalent standard ANN (direct input encoding is sketched below).
arXiv Detail & Related papers (2020-08-09T05:07:17Z)
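Direct input encoding feeds the analog pixel values as input current to the first spiking layer at every timestep, instead of converting them to Poisson spike trains. A minimal sketch with hypothetical layer sizes:

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, 3, padding=1)
T, THRESHOLD = 5, 1.0                  # 5 timesteps, matching the summary

image = torch.rand(1, 3, 32, 32)       # analog pixels, no Poisson encoding
v = torch.zeros(1, 64, 32, 32)
spike_counts = torch.zeros_like(v)

for t in range(T):
    v = v + conv1(image)               # the same analog current every step
    spikes = (v >= THRESHOLD).float()  # the first layer emits the first spikes
    v = v - spikes * THRESHOLD         # soft reset
    spike_counts += spikes
```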
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in latency and computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition (a standard conversion step is sketched below).
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
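A standard building block of ANN-to-SNN conversion is threshold balancing: set each spiking layer's firing threshold from the maximum activation observed on calibration data, so integrate-and-fire rates approximate the ANN's ReLU outputs. A bare-bones sketch (the paper's full layer-wise framework goes further):

```python
import torch
import torch.nn as nn


@torch.no_grad()
def balance_thresholds(ann: nn.Sequential, calib: torch.Tensor) -> list:
    """One firing threshold per ReLU layer: the maximum activation seen
    on calibration data, so IF firing rates can approximate ReLU outputs
    without saturating."""
    thresholds, x = [], calib
    for layer in ann:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            thresholds.append(x.max().item())
    return thresholds


ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print(balance_thresholds(ann, torch.rand(64, 784)))
```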
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.