P-CRITICAL: A Reservoir Autoregulation Plasticity Rule for Neuromorphic
Hardware
- URL: http://arxiv.org/abs/2009.05593v1
- Date: Fri, 11 Sep 2020 18:13:03 GMT
- Title: P-CRITICAL: A Reservoir Autoregulation Plasticity Rule for Neuromorphic
Hardware
- Authors: Ismael Balafrej and Jean Rouat
- Abstract summary: Backpropagation algorithms on recurrent artificial neural networks require an unfolding of accumulated states over time.
We propose a new local plasticity rule named P-CRITICAL designed for automatic reservoir tuning.
We observe an improved performance on tasks coming from various modalities without the need to tune parameters.
- Score: 4.416484585765027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backpropagation algorithms on recurrent artificial neural networks require an
unfolding of accumulated states over time. These states must be kept in memory
for an undefined period of time which is task-dependent. This paper uses the
reservoir computing paradigm where an untrained recurrent neural network layer
is used as a preprocessor stage to learn temporal and limited data. These
so-called reservoirs require either extensive fine-tuning or neuroplasticity
with unsupervised learning rules. We propose a new local plasticity rule named
P-CRITICAL designed for automatic reservoir tuning that translates well to
Intel's Loihi research chip, a recent neuromorphic processor. We compare our
approach on well-known datasets from the machine learning community while using
a spiking neuronal architecture. We observe an improved performance on tasks
coming from various modalities without the need to tune parameters. Such
algorithms could be a key to end-to-end energy-efficient neuromorphic-based
machine learning on edge devices.
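The reservoir computing setup described above (an untrained, randomly connected recurrent layer used as a temporal preprocessor, with only a readout trained on its states) can be sketched minimally. This is a generic echo-state-style illustration, not the paper's P-CRITICAL plasticity rule or its spiking Loihi implementation; the spectral-radius scaling shown here is exactly the kind of manual tuning that P-CRITICAL aims to replace with a local, automatic rule.

```python
# Minimal echo-state reservoir sketch (hypothetical illustration):
# a fixed random recurrent layer maps an input sequence into a
# high-dimensional state trajectory; only a linear readout on the
# collected states would be trained.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weights (untrained)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # recurrent weights (untrained)
# Manual tuning step: rescale so the spectral radius is below 1
# (a common sufficient condition for the echo-state property).
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with a sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

seq = rng.normal(size=(20, n_in))     # a toy 20-step input sequence
states = run_reservoir(seq)           # shape: (20, 100)
# A readout (e.g. ridge regression) would then be fit on `states`.
```

In a spiking, neuromorphic variant the tanh units become spiking neurons and the global eigenvalue computation is unavailable, which is why local autoregulation rules of the kind the paper proposes are needed.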
Related papers
- SpikingJelly: An open-source machine learning infrastructure platform
for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - Neuromorphic analog circuits for robust on-chip always-on learning in
spiking neural networks [1.9809266426888898]
Mixed-signal neuromorphic systems represent a promising solution for solving extreme-edge computing tasks.
Their spiking neural network circuits are optimized for processing sensory data on-line in continuous-time.
We design on-chip learning circuits with short-term analog dynamics and long-term tristate discretization mechanisms.
arXiv Detail & Related papers (2023-07-12T11:14:25Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show a competitive performance in accuracy with a clear advantage in the computational complexity for Event-Based Three-factor Local Plasticity (ETLP)
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement of the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energetically expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letters reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - Deep Metric Learning with Locality Sensitive Angular Loss for
Self-Correcting Source Separation of Neural Spiking Signals [77.34726150561087]
We propose a methodology based on deep metric learning to address the need for automated post-hoc cleaning and robust separation filters.
We validate this method with an artificially corrupted label set based on source-separated high-density surface electromyography recordings.
This approach enables a neural network to learn to accurately decode neurophysiological time series using any imperfect method of labelling the signal.
arXiv Detail & Related papers (2021-10-13T21:51:56Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth generation neuromorphic chip - Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - The Backpropagation Algorithm Implemented on Spiking Neuromorphic
Hardware [4.3310896118860445]
We present a neuromorphic, spiking backpropagation algorithm based on pulse-gated dynamical information coordination and processing.
We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset.
arXiv Detail & Related papers (2021-06-13T15:56:40Z) - Reservoir Stack Machines [77.12475691708838]
Memory-augmented neural networks equip a recurrent neural network with an explicit memory to support tasks that require information storage.
We introduce the reservoir stack machine, a model which can provably recognize all deterministic context-free languages.
Our results show that the reservoir stack machine achieves zero error, even on test sequences longer than the training data.
arXiv Detail & Related papers (2021-05-04T16:50:40Z) - Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate and Fire neurons, which is capable of training an SNN to learn complex spatio-temporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neuron and synapses which retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z) - Structural plasticity on an accelerated analog neuromorphic hardware
system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.