LM-HT SNN: Enhancing the Performance of SNN to ANN Counterpart through
Learnable Multi-hierarchical Threshold Model
- URL: http://arxiv.org/abs/2402.00411v1
- Date: Thu, 1 Feb 2024 08:10:39 GMT
- Authors: Zecheng Hao, Xinyu Shi, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang
- Abstract summary: Spiking Neural Network (SNN) has garnered widespread academic interest for its intrinsic ability to transmit information in a biologically inspired and energy-efficient manner.
In this paper, we propose a novel LM-HT model, which is an equidistant multi-hierarchical model that can dynamically regulate the global input current and membrane potential leakage.
- Score: 43.803689602458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to traditional Artificial Neural Network (ANN), Spiking Neural
Network (SNN) has garnered widespread academic interest for its intrinsic
ability to transmit information in a more biologically inspired and
energy-efficient manner. However, despite previous efforts to optimize the
learning gradients and model structure of SNNs through various methods, SNNs
still lag behind ANNs in terms of performance to some extent. The recently
proposed multi-threshold model provides more possibilities for further
enhancing the learning capability of SNNs. In this paper, we rigorously analyze
the relationship among the multi-threshold model, vanilla spiking model and
quantized ANNs from a mathematical perspective, then propose a novel LM-HT
model, which is an equidistant multi-hierarchical model that can dynamically
regulate the global input current and membrane potential leakage on the time
dimension. In addition, we note that the direct training algorithm based on the
LM-HT model can seamlessly integrate with the traditional ANN-SNN Conversion
framework. This novel hybrid learning framework can effectively improve the
relatively poor performance of converted SNNs under low time latency. Extensive
experimental results have demonstrated that our LM-HT model can significantly
outperform previous state-of-the-art works on various types of datasets,
promoting SNNs to a new level of performance comparable to quantized ANNs.
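The abstract describes an equidistant multi-hierarchical threshold neuron whose input current scaling and membrane leakage are learnable. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: the function name `lm_ht_step` and the parameters `lam` (leak), `alpha` (input scaling), `theta` (base threshold), and `levels` (number of equidistant firing levels) are illustrative assumptions chosen to mirror the description.

```python
import numpy as np

def lm_ht_step(v, x, theta=1.0, levels=4, lam=1.0, alpha=1.0):
    """One simulation step of a simplified equidistant multi-threshold neuron.

    v      : membrane potential carried over from the previous step
    x      : input current at this step
    theta  : base threshold; firing levels sit at theta, 2*theta, ..., levels*theta
    lam    : leak factor on the membrane potential (learnable in the paper's model)
    alpha  : scaling on the input current (learnable in the paper's model)
    Returns the updated potential and a graded spike count in {0, ..., levels}.
    """
    v = lam * v + alpha * x                       # leaky integration of input
    s = int(np.clip(np.floor(v / theta), 0, levels))  # how many thresholds were crossed
    v = v - s * theta                             # soft reset by subtraction
    return v, s
```

With `levels = 1` this reduces to a vanilla integrate-and-fire step, while larger `levels` lets a single step carry multi-bit information, which is the sense in which the multi-threshold model relates to quantized ANN activations.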
Related papers
- Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion Beyond ReLU Network [10.760652747217668]
Spiking neural networks (SNNs) are studied across multiple disciplines to simulate neuroscientific mechanisms.
The lack of a discrete theory obstructs the practical application of SNNs by limiting their performance and nonlinearity support.
We present a new optimization-theoretic perspective of the discrete dynamics of spiking neurons.
arXiv Detail & Related papers (2024-07-01T02:09:20Z) - Advancing Spiking Neural Networks towards Multiscale Spatiotemporal Interaction Learning [10.702093960098106]
Spiking Neural Networks (SNNs) serve as an energy-efficient alternative to Artificial Neural Networks (ANNs).
We have designed a Spiking Multiscale Attention (SMA) module that captures multiscale spatiotemporal interaction information.
Our approach has achieved state-of-the-art results on mainstream neural datasets.
arXiv Detail & Related papers (2024-05-22T14:16:05Z) - Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods [33.377770671553336]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs).
In this paper, we provide a new perspective to summarize the theories and methods for training deep SNNs with high performance.
arXiv Detail & Related papers (2024-05-06T09:58:54Z) - A Hybrid SNN-ANN Network for Event-based Object Detection with Spatial and Temporal Attention [2.5075774828443467]
Event cameras offer high temporal resolution and dynamic range with minimal motion blur, making them promising for object detection tasks.
While Spiking Neural Networks (SNNs) are a natural match for event-based sensory data, Artificial Neural Networks (ANNs) tend to display more stable training dynamics.
We introduce the first Hybrid Attention-based SNN-ANN backbone for object detection using event cameras.
arXiv Detail & Related papers (2024-03-15T10:28:31Z) - Learning Long Sequences in Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) take inspiration from the brain to enable energy-efficient computations.
Recent interest in efficient alternatives to Transformers has given rise to state-of-the-art recurrent architectures named state space models (SSMs).
arXiv Detail & Related papers (2023-12-14T13:30:27Z) - Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices with super-high speeds.
We propose a novel approach fully spiking denoising diffusion implicit model (FSDDIM) to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.