In-Hardware Learning of Multilayer Spiking Neural Networks on a
Neuromorphic Processor
- URL: http://arxiv.org/abs/2105.03649v1
- Date: Sat, 8 May 2021 09:22:21 GMT
- Title: In-Hardware Learning of Multilayer Spiking Neural Networks on a
Neuromorphic Processor
- Authors: Amar Shrestha, Haowen Fang, Daniel Patrick Rider, Zaidao Mei and Qinru
Qiu
- Abstract summary: This work presents a spike-based backpropagation algorithm with biologically plausible local update rules and adapts it to fit the constraints of neuromorphic hardware.
The algorithm is implemented on the Intel Loihi chip, enabling low-power in-hardware supervised online learning of multilayered SNNs for mobile applications.
- Score: 6.816315761266531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although widely used in machine learning, backpropagation cannot directly be
applied to SNN training and is not feasible on a neuromorphic processor that
emulates biological neurons and synapses. This work presents a spike-based
backpropagation algorithm with biologically plausible local update rules and
adapts it to fit the constraints of neuromorphic hardware. The algorithm is
implemented on the Intel Loihi chip, enabling low-power in-hardware supervised
online learning of multilayered SNNs for mobile applications. We test this
implementation on the MNIST, Fashion-MNIST, CIFAR-10 and MSTAR datasets with
promising performance and energy-efficiency, and demonstrate the possibility of
incremental online learning with the implementation.
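To make the core idea concrete, here is a minimal, hypothetical sketch of a spike-based local learning step in the spirit of the abstract: each synapse updates using only locally available quantities (a presynaptic spike trace and a per-neuron error signal). This is an illustration of the general three-factor-style approach, not the paper's actual Loihi algorithm; all names, constants, and the network shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer of leaky integrate-and-fire (LIF) neurons with a local,
# spike-driven weight update. NOT the paper's exact algorithm -- just a
# sketch of a rule where each synapse updates from its presynaptic
# eligibility trace and a locally delivered error signal.
n_in, n_out = 4, 2
w = rng.normal(0.0, 0.5, size=(n_out, n_in))   # synaptic weights
v = np.zeros(n_out)                            # membrane potentials
trace = np.zeros(n_in)                         # presynaptic traces
decay, trace_decay, thresh, lr = 0.9, 0.8, 1.0, 0.1  # assumed constants

def step(spikes_in, err):
    """One discrete time step: integrate, fire, apply local update."""
    global v, trace
    trace = trace_decay * trace + spikes_in    # per-synapse trace
    v = decay * v + w @ spikes_in              # integrate input current
    spikes_out = (v >= thresh).astype(float)   # fire when over threshold
    v = np.where(spikes_out > 0, 0.0, v)       # reset fired neurons
    # Local update: per-neuron error x per-synapse trace (outer product),
    # so no global gradient needs to be transported across layers.
    w[:] += lr * np.outer(err, trace)
    return spikes_out

spikes_in = np.array([1.0, 0.0, 1.0, 0.0])
target = np.array([1.0, 0.0])
out = np.zeros(n_out)
for _ in range(10):
    out = step(spikes_in, target - out)        # error = target - output
```

On real neuromorphic hardware the same structure would be constrained further (quantized weights, event-driven trace updates), which is the adaptation the paper describes.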
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN) we reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - EchoSpike Predictive Plasticity: An Online Local Learning Rule for Spiking Neural Networks [4.644628459389789]
Spiking Neural Networks (SNNs) are attractive due to their potential in applications requiring low power and memory.
"EchoSpike Predictive Plasticity" (ESPP) learning rule is a pioneering online local learning rule.
ESPP represents a significant advancement in developing biologically plausible self-supervised learning models for neuromorphic computing at the edge.
arXiv Detail & Related papers (2024-05-22T20:20:43Z) - SpikingJelly: An open-source machine learning infrastructure platform
for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based
on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously and as successfully as backpropagation.
This method achieves a favorable performance compared to the state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - The Backpropagation Algorithm Implemented on Spiking Neuromorphic
Hardware [4.3310896118860445]
We present a neuromorphic, spiking backpropagation algorithm based on pulse-gated dynamical information coordination and processing.
We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset.
arXiv Detail & Related papers (2021-06-13T15:56:40Z) - On-Chip Error-triggered Learning of Multi-layer Memristive Spiking
Neural Networks [1.7958576850695402]
We propose a local, gradient-based, error-triggered learning algorithm with online ternary weight updates.
The proposed algorithm enables online training of multi-layer SNNs with memristive neuromorphic hardware.
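The error-triggered ternary idea can be sketched briefly: updates are applied only when a local error estimate crosses a threshold, and each applied update is quantized to {-1, 0, +1} times a fixed step, which suits memristive synapses with coarse programmability. This is an illustrative sketch in the spirit of the cited paper, not its exact rule; the threshold, step size, and trace formulation are assumptions.

```python
import numpy as np

def ternary_update(w, pre_trace, err, theta=0.5, step=0.01):
    """Error-triggered ternary weight update (illustrative sketch).

    w         : (n_out, n_in) weight matrix
    pre_trace : (n_in,) presynaptic eligibility trace
    err       : (n_out,) local error signal per neuron
    theta     : trigger threshold (assumed value)
    step      : quantized update magnitude (assumed value)
    """
    # Outer product gives a per-synapse local gradient estimate.
    g = np.outer(err, pre_trace)
    # Trigger: update only where |gradient| exceeds theta; quantize the
    # update direction to {-1, 0, +1} so hardware only needs coarse writes.
    delta = np.where(np.abs(g) > theta, np.sign(g), 0.0)
    return w + step * delta

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # presynaptic traces
err = np.array([1.0, -0.2])       # per-neuron errors
w = ternary_update(w, pre, err)
# Only synapses with |err * trace| > theta change, by exactly +/- step.
```

The trigger threshold trades accuracy for write frequency: a larger `theta` means fewer (and cheaper) memristor updates at the cost of a noisier gradient estimate.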
arXiv Detail & Related papers (2020-11-21T19:44:19Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.