Parallel Spiking Unit for Efficient Training of Spiking Neural Networks
- URL: http://arxiv.org/abs/2402.00449v3
- Date: Sat, 8 Jun 2024 01:26:47 GMT
- Title: Parallel Spiking Unit for Efficient Training of Spiking Neural Networks
- Authors: Yang Li, Yinqian Sun, Xiang He, Yiting Dong, Dongcheng Zhao, Yi Zeng
- Abstract summary: Spiking Neural Networks (SNNs) hold promise for advancing artificial intelligence but are hampered by their inherent sequential computational dependency.
This paper introduces the innovative Parallel Spiking Unit (PSU) and its two derivatives, the Input-aware PSU (IPSU) and the Reset-aware PSU (RPSU).
These variants skillfully decouple the leaky integration and firing mechanisms in spiking neurons while probabilistically managing the reset process.
- Score: 8.912926151352888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient parallel computing has become a pivotal element in advancing artificial intelligence. Yet, the deployment of Spiking Neural Networks (SNNs) in this domain is hampered by their inherent sequential computational dependency. This constraint arises from the need for each time step's processing to rely on the preceding step's outcomes, significantly impeding the adaptability of SNN models to massively parallel computing environments. Addressing this challenge, our paper introduces the innovative Parallel Spiking Unit (PSU) and its two derivatives, the Input-aware PSU (IPSU) and Reset-aware PSU (RPSU). These variants skillfully decouple the leaky integration and firing mechanisms in spiking neurons while probabilistically managing the reset process. By preserving the fundamental computational attributes of the spiking neuron model, our approach enables the concurrent computation of all membrane potential instances within the SNN, facilitating parallel spike output generation and substantially enhancing computational efficiency. Comprehensive testing across various datasets, including static and sequential images, Dynamic Vision Sensor (DVS) data, and speech datasets, demonstrates that the PSU and its variants not only significantly boost performance and simulation speed but also augment the energy efficiency of SNNs through enhanced sparsity in neural activity. These advancements underscore the potential of our method in revolutionizing SNN deployment for high-performance parallel computing applications.
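The following is a minimal sketch, not the authors' released code, of the mechanism the abstract describes. Once the reset is decoupled from integration, the leaky-integrate recurrence u[t] = lam * u[t-1] + x[t] unrolls into a lower-triangular linear map over all time steps, so every membrane potential (and hence every spike) can be computed at once; the lam/theta values and names below are illustrative, and the probabilistic reset handling of the IPSU/RPSU is omitted.

```python
import torch

T, N = 8, 4                       # time steps, neurons
lam, theta = 0.9, 1.0             # leak factor, firing threshold (illustrative)
x = torch.rand(T, N)              # input currents for all time steps

# --- Standard sequential LIF (reset-by-subtraction), one step at a time ---
u = torch.zeros(N)
seq_spikes = []
for t in range(T):
    u = lam * u + x[t]            # leaky integration
    s = (u >= theta).float()      # fire
    u = u - s * theta             # reset depends on the spike -> serial chain
    seq_spikes.append(s)
seq_spikes = torch.stack(seq_spikes)

# --- PSU-style parallel form: integration decoupled from the reset ---
# Without reset, u[t] = sum_{i <= t} lam**(t - i) * x[i], a lower-triangular
# linear map, so all T membrane potentials come from a single matmul.
idx = torch.arange(T)
expo = (idx[:, None] - idx[None, :]).float()
decay = torch.where(expo >= 0, lam ** expo, torch.zeros(T, T))  # (T, T)
u_par = decay @ x                          # all membrane potentials at once
par_spikes = (u_par >= theta).float()      # fire for every step in parallel
# Note: because this sketch drops the reset entirely, par_spikes can differ
# from seq_spikes after a neuron fires; the paper's RPSU compensates for the
# missing reset probabilistically.
```

On parallel hardware the single matrix product replaces T dependent update steps, which is the mechanism behind the simulation-speed and efficiency gains reported above.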
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Time-independent Spiking Neuron via Membrane Potential Estimation for Efficient Spiking Neural Networks [4.142699381024752]
The computational inefficiency of spiking neural networks (SNNs) is primarily due to the sequential updating of membrane potentials.
We propose Membrane Potential Estimation Parallel Spiking Neurons (MPE-PSN), a parallel computation method for spiking neurons.
Our approach exhibits promise for enhancing computational efficiency, particularly under conditions of elevated neuron density.
arXiv Detail & Related papers (2024-09-08T05:14:22Z)
- Parallel Proportional Fusion of Spiking Quantum Neural Network for Optimizing Image Classification [10.069224006497162]
We introduce a novel architecture termed Parallel Proportional Fusion of Quantum and Spiking Neural Networks (PPF-QSNN).
The proposed PPF-QSNN outperforms both the existing spiking neural network and the serial quantum neural network across metrics such as accuracy, loss, and robustness.
This study lays the groundwork for advancing and applying quantum advantage in artificial intelligence computation.
arXiv Detail & Related papers (2024-04-01T10:35:35Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips (a minimal SpikingJelly usage sketch follows this list).
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch (a minimal snnTorch usage sketch also follows this list).
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) owing to their low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design [2.6872737601772956]
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes.
We propose a novel supervised learning algorithm for SNNs based on temporal coding.
arXiv Detail & Related papers (2020-01-08T03:37:08Z)
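For the SpikingJelly entry above, here is a hedged minimal sketch of building and running a small SNN with the toolkit, assuming a recent release that ships the activation_based API; the layer sizes and time window are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, layer, functional

# A tiny two-layer SNN: linear synapses followed by LIF neurons.
net = nn.Sequential(
    layer.Linear(784, 100),
    neuron.LIFNode(tau=2.0),
    layer.Linear(100, 10),
    neuron.LIFNode(tau=2.0),
)

x = torch.rand(16, 784)           # a batch of flattened inputs
out = 0.0
for t in range(4):                # run T = 4 time steps and accumulate spikes
    out = out + net(x)
rate = out / 4                    # spike-rate readout over the window
functional.reset_net(net)         # clear membrane state between samples
```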
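Likewise for the snnTorch entry: a minimal sketch of the package's leaky integrate-and-fire unit, assuming a recent snnTorch release (the IPU-optimized build the paper describes is not shown; all values are illustrative).

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)         # leaky integrate-and-fire, membrane decay beta
mem = lif.init_leaky()            # zero-initialized membrane potential state
inputs = torch.rand(5, 1, 10)     # 5 time steps of input current

spikes = []
for step in range(inputs.size(0)):
    spk, mem = lif(inputs[step], mem)   # one step: returns spikes and new state
    spikes.append(spk)
spikes = torch.stack(spikes)      # (T, batch, features) spike train
```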