A Supervised Learning Algorithm for Multilayer Spiking Neural Networks
Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design
- URL: http://arxiv.org/abs/2001.05348v1
- Date: Wed, 8 Jan 2020 03:37:08 GMT
- Title: A Supervised Learning Algorithm for Multilayer Spiking Neural Networks
Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design
- Authors: Yusuke Sakemi, Kai Morino, Takashi Morie, Kazuyuki Aihara
- Abstract summary: Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes.
We propose a novel supervised learning algorithm for SNNs based on temporal coding.
- Score: 2.6872737601772956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) are brain-inspired mathematical models with
the ability to process information in the form of spikes. SNNs are expected to
provide not only new machine-learning algorithms, but also energy-efficient
computational models when implemented in VLSI circuits. In this paper, we
propose a novel supervised learning algorithm for SNNs based on temporal
coding. A spiking neuron in this algorithm is designed to facilitate analog
VLSI implementations with analog resistive memory, by which ultra-high energy
efficiency can be achieved. We also propose several techniques to improve the
performance on a recognition task, and show that the classification accuracy of
the proposed algorithm is as high as that of the state-of-the-art temporal
coding SNN algorithms on the MNIST dataset. Finally, we discuss the robustness
of the proposed SNNs against variations that arise from the device
manufacturing process and are unavoidable in analog VLSI implementation. We
also propose a technique to suppress the effects of variations in the
manufacturing process on the recognition performance.
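As a rough, non-authoritative illustration of what "temporal coding" with analytically computable spike times looks like, the sketch below implements a non-leaky integrate-and-fire neuron whose output spike time is a closed-form function of its input spike times, together with the gradients a supervised rule would use. This is a standard formulation from the temporal-coding SNN literature and is assumed here for illustration only; the function names, threshold value, and the weight-perturbation idea at the end are illustrative choices and may differ from the exact neuron model and variation-suppression technique proposed in the paper.

```python
import numpy as np

def spike_time(in_times, w, theta=1.0, t_max=np.inf):
    """Analytic output spike time of a non-leaky integrate-and-fire neuron.

    Membrane potential: v(t) = sum_i w[i] * max(t - in_times[i], 0).
    The neuron fires when v(t) first reaches `theta`; returns `t_max`
    if the threshold is never crossed (the neuron stays silent).
    """
    order = np.argsort(in_times)                # process inputs in temporal order
    t_s, w_s = in_times[order], w[order]
    w_cum = np.cumsum(w_s)                      # slope of v(t) after the k-th input
    a_cum = np.cumsum(w_s * t_s)

    for k in range(len(t_s)):
        if w_cum[k] <= 0.0:
            continue                            # potential is not rising in this interval
        t_cand = (theta + a_cum[k]) / w_cum[k]  # where v(t) = theta on this segment
        t_next = t_s[k + 1] if k + 1 < len(t_s) else t_max
        if t_s[k] <= t_cand <= t_next:
            return t_cand                       # crossed before the next input arrives
    return t_max


def spike_time_grads(in_times, w, t_out):
    """Gradients of t_out w.r.t. weights and input spike times (causal inputs only)."""
    causal = in_times < t_out                   # later spikes cannot influence t_out
    denom = np.sum(w[causal])
    d_w = np.where(causal, (in_times - t_out) / denom, 0.0)
    d_t = np.where(causal, w / denom, 0.0)
    return d_w, d_t


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t_in = rng.uniform(0.0, 1.0, size=8)        # earlier spike = stronger input (temporal code)
    w = rng.normal(0.5, 0.3, size=8)
    t_out = spike_time(t_in, w)
    print("output spike time:", t_out)
    if np.isfinite(t_out):
        d_w, _ = spike_time_grads(t_in, w, t_out)
        # One gradient step on a squared-error loss (t_out - target)^2 / 2,
        # e.g. pushing the correct-class neuron to fire earlier (target = 0.5 here):
        w -= 0.1 * (t_out - 0.5) * d_w
```

On the robustness discussion, one generic (and here purely hypothetical) way to probe sensitivity to device variations is to perturb the trained weights, e.g. `w * (1 + sigma * np.random.randn(*w.shape))`, and re-evaluate the spike times; training under such perturbations is a common way to improve tolerance to manufacturing mismatch, though the paper's specific suppression technique may differ.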
Related papers
- Stepwise Weighted Spike Coding for Deep Spiking Neural Networks [7.524721345903027]
Spiking Neural Networks (SNNs) seek to mimic the spiking behavior of biological neurons.
We propose a novel Stepwise Weighted Spike (SWS) coding scheme to enhance the encoding of information in spikes.
This approach compresses the spikes by weighting the significance of the spike in each step of neural computation, achieving high performance and low energy consumption.
arXiv Detail & Related papers (2024-08-30T12:39:25Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the tradeoffs of our proposal in terms of accuracy, inference latency, spiking sparsity, energy consumption, and datasets.
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking Neural Networks: from Algorithms to Technology [11.479629320025673]
Spiking neural networks (SNNs) have become an attractive alternative to deep neural networks for a broad range of signal processing applications.
We describe advances in algorithmic and optimization innovations to efficiently train and scale low-latency and energy-efficient SNNs.
We discuss the potential path forward for research in building deployable SNN systems.
arXiv Detail & Related papers (2023-12-02T19:47:00Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design [0.08431877864777441]
We present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms.
We show the energy efficiency of SNN algorithms for video processing and online learning.
arXiv Detail & Related papers (2023-03-27T14:03:11Z)
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks.
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.