Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks
- URL: http://arxiv.org/abs/2312.06372v2
- Date: Sun, 17 Dec 2023 02:19:42 GMT
- Title: Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks
- Authors: Yufei Guo, Yuanpei Chen, Xiaode Liu, Weihang Peng, Yuhan Zhang, Xuhui
Huang, Zhe Ma
- Abstract summary: The Spiking Neural Network (SNN) is a biologically inspired neural network infrastructure.
In this paper, we propose a ternary spike neuron to transmit information.
We show that the ternary spike can consistently outperform state-of-the-art methods.
- Score: 19.304952813634994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Spiking Neural Network (SNN), as one of the biologically inspired neural
network infrastructures, has drawn increasing attention recently. It adopts
binary spike activations to transmit information, so the multiplications of
activations and weights can be replaced by additions, which yields high
energy efficiency. However, in this paper, we prove theoretically and
experimentally that the binary spike activation map cannot carry enough
information, causing information loss and degraded accuracy. To address this
problem, we propose a ternary spike neuron to transmit information. The
ternary spike neuron retains the event-driven and multiplication-free
advantages of the binary spike neuron while boosting information capacity.
Furthermore, we embed a trainable factor in the ternary spike neuron to learn
a suitable spike amplitude, so our SNN adopts different spike amplitudes
across layers, which better suits the observation that membrane potential
distributions differ from layer to layer. To retain the efficiency of the
vanilla ternary spike, the trainable ternary spike SNN is converted back to a
standard one via a re-parameterization technique at inference time. Extensive
experiments with several popular network structures over static and dynamic
datasets show that the ternary spike can consistently outperform
state-of-the-art methods. Our code is open-sourced at
https://github.com/yfguo91/Ternary-Spike.
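The two ideas in the abstract, ternary spikes with a trainable amplitude and re-parameterization back to unit spikes at inference, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the threshold `theta`, the amplitude name `alpha`, and the symmetric firing rule are assumptions made for the example.

```python
import numpy as np

def ternary_spike(membrane_potential, theta=1.0, alpha=1.0):
    """Fire +alpha above theta, -alpha below -theta, else stay silent.

    With alpha=1.0 the output is a standard {-1, 0, 1} ternary spike,
    so downstream weight multiplications reduce to signed additions.
    """
    spikes = np.zeros_like(membrane_potential)
    spikes[membrane_potential >= theta] = alpha
    spikes[membrane_potential <= -theta] = -alpha
    return spikes

def fold_amplitude(weights, alpha):
    """Re-parameterization step (illustrative): fold the learned amplitude
    into the weights so inference can use plain {-1, 0, 1} spikes."""
    return weights * alpha

u = np.array([1.3, -0.2, -2.1, 0.8])   # example membrane potentials
w = np.array([0.5, -0.4, 0.2, 0.1])    # example synaptic weights
alpha = 0.7                            # learned spike amplitude

# Training-time view: spikes carry the learned amplitude alpha.
s_train = ternary_spike(u, alpha=alpha)

# Inference-time view: unit spikes, with alpha folded into the weights.
s_infer = ternary_spike(u, alpha=1.0)
w_folded = fold_amplitude(w, alpha)

# Both views produce the same postsynaptic contribution,
# which is why the conversion preserves the network's function.
assert np.allclose(s_train * w, s_infer * w_folded)
```

The folding works because the spike is either zero or exactly `±alpha`, so the scalar can be moved from the activation onto the weights without changing any product.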
Related papers
- Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a representation of the continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- MSAT: Biologically Inspired Multi-Stage Adaptive Threshold for Conversion of Spiking Neural Networks [11.392893261073594]
Spiking Neural Networks (SNNs) can do inference with low power consumption due to their spike sparsity.
ANN-SNN conversion is an efficient way to achieve deep SNNs by converting well-trained Artificial Neural Networks (ANNs).
Existing methods commonly use a constant threshold for conversion, which prevents neurons from rapidly delivering spikes to deeper layers.
arXiv Detail & Related papers (2023-03-23T07:18:08Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Exploiting High Performance Spiking Neural Networks with Efficient Spiking Patterns [4.8416725611508244]
Spiking Neural Networks (SNNs) use discrete spike sequences to transmit information, which significantly mimics the information transmission of the brain.
This paper introduces the dynamic Burst pattern and designs the Leaky Integrate and Fire or Burst (LIFB) neuron that can make a trade-off between short-time performance and dynamic temporal performance.
arXiv Detail & Related papers (2023-01-29T04:22:07Z)
- Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer Spiking Neural Networks based on Spike-Timing-Dependent Plasticity [13.384228628766236]
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks.
We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones.
We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached an accuracy of 98.41% and 87.56%, respectively.
arXiv Detail & Related papers (2022-11-10T08:32:13Z)
- A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware [0.0]
Spiking Neural Networks (SNNs) have emerged as a hardware efficient architecture for classification tasks.
There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs).
arXiv Detail & Related papers (2022-07-20T08:57:53Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- The fine line between dead neurons and sparsity in binarized spiking neural networks [1.370633147306388]
Spiking neural networks can compensate for quantization error by encoding information or processing discretized quantities.
In this paper, we propose the use of 'threshold annealing' as a warm-up method for firing thresholds.
We show it enables the propagation of spikes across multiple layers where neurons would otherwise cease to fire, and in doing so, achieve highly competitive results on four diverse datasets.
arXiv Detail & Related papers (2022-01-28T03:33:12Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.