Spiking Neural Networks: The Future of Brain-Inspired Computing
- URL: http://arxiv.org/abs/2510.27379v1
- Date: Fri, 31 Oct 2025 11:14:59 GMT
- Title: Spiking Neural Networks: The Future of Brain-Inspired Computing
- Authors: Sales G. Aribe Jr.
- Abstract summary: Spiking Neural Networks (SNNs) represent the latest generation of neural computation. SNNs operate using distinct spike events, making them inherently more energy-efficient and temporally dynamic. This study presents a comprehensive analysis of SNN design models, training algorithms, and multi-dimensional performance metrics.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Networks (SNNs) represent the latest generation of neural computation, offering a brain-inspired alternative to conventional Artificial Neural Networks (ANNs). Unlike ANNs, which depend on continuous-valued signals, SNNs operate using distinct spike events, making them inherently more energy-efficient and temporally dynamic. This study presents a comprehensive analysis of SNN design models, training algorithms, and multi-dimensional performance metrics, including accuracy, energy consumption, latency, spike count, and convergence behavior. Key neuron models such as the Leaky Integrate-and-Fire (LIF) and training strategies, including surrogate gradient descent, ANN-to-SNN conversion, and Spike-Timing Dependent Plasticity (STDP), are examined in depth. Results show that surrogate gradient-trained SNNs closely approximate ANN accuracy (within 1-2%), with faster convergence by the 20th epoch and latency as low as 10 milliseconds. Converted SNNs also achieve competitive performance but require higher spike counts and longer simulation windows. STDP-based SNNs, though slower to converge, exhibit the lowest spike counts and energy consumption (as low as 5 millijoules per inference), making them optimal for unsupervised and low-power tasks. These findings reinforce the suitability of SNNs for energy-constrained, latency-sensitive, and adaptive applications such as robotics, neuromorphic vision, and edge AI systems. While promising, challenges persist in hardware standardization and scalable training. This study concludes that SNNs, with further refinement, are poised to propel the next phase of neuromorphic computing.
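The Leaky Integrate-and-Fire (LIF) dynamics discussed in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration only; the threshold, reset value, and time constant below are placeholder choices, not parameters taken from the paper:

```python
def lif_step(v, i_in, v_th=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v       : membrane potential before this step
    i_in    : input current during this step
    v_th    : firing threshold
    v_reset : potential after a spike
    tau     : membrane time constant (in units of dt)
    Returns (new_v, spike), where spike is 1.0 if the neuron fired.
    """
    # Leaky integration: the potential decays toward v_reset
    # while being driven up by the input current.
    v = v + (dt / tau) * (-(v - v_reset) + i_in)
    spike = float(v >= v_th)
    if spike:
        v = v_reset  # hard reset after emitting a spike
    return v, spike

# Drive a single neuron with a constant suprathreshold current
# and count how many spikes it emits over 100 steps.
v, spikes = 0.0, 0
for _ in range(100):
    v, s = lif_step(v, i_in=1.5)
    spikes += int(s)
```

Because the spike function is a hard threshold, its derivative is zero almost everywhere; surrogate-gradient training (mentioned above) works around this by substituting a smooth approximation of that derivative during backpropagation.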
Related papers
- A Self-Ensemble Inspired Approach for Effective Training of Binary-Weight Spiking Neural Networks [66.80058515743468]
Training Spiking Neural Networks (SNNs) and Binary Neural Networks (BNNs) is challenging because of the non-differentiable spike generation function. We present a novel perspective on the dynamics of SNNs and their close connection to BNNs through an analysis of the backpropagation process. Specifically, we leverage a structure of multiple shortcuts and a knowledge distillation-based training technique to improve the training of (binary-weight) SNNs.
arXiv Detail & Related papers (2025-08-18T04:11:06Z)
- Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware. Most continuous-control algorithms, however, are designed for Artificial Neural Networks (ANNs). We show that this mismatch destabilizes SNN training and degrades performance. We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous-control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z)
- STEMS: Spatial-Temporal Mapping Tool For Spiking Neural Networks [5.144074723846297]
Spiking Neural Networks (SNNs) are promising bio-inspired third-generation neural networks. Recent research has trained deep SNN models with accuracy on par with Artificial Neural Networks (ANNs).
arXiv Detail & Related papers (2025-02-05T15:44:15Z)
- High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
Biologically inspired spiking neural networks (SNNs) are harder to train than artificial neural networks (ANNs).
We show that deep SNN models can be trained to match the performance of ANNs exactly.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z)
- A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception [27.144985031646932]
Spiking Neural Networks (SNNs) are a class of bio-inspired neural networks that promise to bring low-power and low-latency inference to edge devices.
For the task of event-based 2D and 3D human pose estimation, we show that our method consumes 88% less power with only a 4% decrease in performance compared to fully ANN counterparts.
arXiv Detail & Related papers (2023-03-24T17:38:45Z)
- Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training [0.8602553195689513]
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs).
In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANNs to SNNs.
We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs.
arXiv Detail & Related papers (2023-03-23T07:57:32Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% on the ImageNet dataset with an SNN using 250 time steps, which is 10x faster than converted SNNs of similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)
- SiamSNN: Siamese Spiking Neural Networks for Energy-Efficient Object Tracking [20.595208488431766]
SiamSNN is the first deep SNN tracker that achieves short latency and low precision loss on the visual object tracking benchmarks OTB2013, VOT2016, and GOT-10k.
SiamSNN notably achieves low energy consumption and real-time performance on the neuromorphic chip TrueNorth.
arXiv Detail & Related papers (2020-03-17T08:49:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.