Implementing efficient balanced networks with mixed-signal spike-based
learning circuits
- URL: http://arxiv.org/abs/2010.14353v2
- Date: Fri, 12 Feb 2021 09:26:16 GMT
- Title: Implementing efficient balanced networks with mixed-signal spike-based
learning circuits
- Authors: Julian Büchel, Jonathan Kakon, Michel Perez, Giacomo Indiveri
- Abstract summary: Efficient Balanced Networks (EBNs) are networks of spiking neurons in which excitatory and inhibitory synaptic currents are balanced on a short timescale.
We develop a novel local learning rule suitable for on-chip implementation that drives a randomly connected network of spiking neurons into a tightly balanced regime.
Thanks to their coding properties and sparse activity, neuromorphic electronic EBNs will be ideally suited for extreme-edge computing applications.
- Score: 2.1640200483378953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient Balanced Networks (EBNs) are networks of spiking neurons in which
excitatory and inhibitory synaptic currents are balanced on a short timescale,
leading to desirable coding properties such as high encoding precision, low
firing rates, and distributed information representation. These benefits make
it desirable to implement such networks in low-power neuromorphic processors.
However, the degree of device mismatch in analog
mixed-signal neuromorphic circuits renders the use of pre-trained EBNs
challenging, if not impossible. To overcome this issue, we developed a novel
local learning rule suitable for on-chip implementation that drives a randomly
connected network of spiking neurons into a tightly balanced regime. Here we
present the integrated circuits that implement this rule and demonstrate their
expected behaviour in low-level circuit simulations. Our proposed method paves
the way towards a system-level implementation of tightly balanced networks on
analog mixed-signal neuromorphic hardware. Thanks to their coding properties
and sparse activity, neuromorphic electronic EBNs will be ideally suited for
extreme-edge computing applications that require low latency and ultra-low
power consumption, and that cannot rely on cloud computing for data processing.
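The paper's circuit-level rule itself is not reproduced here, but the following minimal NumPy sketch illustrates the family of local, voltage-dependent updates that drive a randomly connected spiking network toward tight excitatory/inhibitory balance, in the spirit of the tightly balanced network literature. All names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch of a local, voltage-dependent balancing rule -- NOT the
# paper's on-chip circuit implementation. When a neuron spikes, the recurrent
# weights it projects are nudged so that recurrent inhibition cancels the
# feedforward excitation, tightening the E/I balance.
import numpy as np

rng = np.random.default_rng(0)

N, M = 20, 2            # number of neurons, input dimensions
dt, T = 1e-3, 20.0      # time step [s], simulated duration [s]
lam = 10.0              # leak / readout decay rate [1/s]
mu, eta = 0.02, 0.01    # firing-rate cost, learning rate (assumed values)
thresh = 0.5            # spiking threshold

F = rng.normal(size=(N, M))                   # fixed feedforward weights
F /= np.linalg.norm(F, axis=1, keepdims=True)
Omega = 0.1 * rng.normal(size=(N, N))         # random recurrent weights (learned)

V = np.zeros(N)         # membrane voltages
r = np.zeros(N)         # filtered spike trains

for step in range(int(T / dt)):
    t = step * dt
    x = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # toy input
    spikes = np.zeros(N)
    k = np.argmax(V)
    if V[k] > thresh:                         # at most one spike per time step
        spikes[k] = 1.0
        # Local plasticity: only quantities available at the synapse are used
        # (postsynaptic voltage, filtered rate, current weight value).
        Omega[:, k] -= eta * (V + mu * r + Omega[:, k])
    V += dt * (-lam * V + F @ x) + Omega @ spikes   # leaky integration
    r += dt * (-lam * r) + spikes

print("mean |V| after balancing:", np.abs(V).mean())
```

As training progresses, the recurrent feedback increasingly cancels the feedforward drive, so membrane voltages stay small and spiking becomes sparse and precise.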
Related papers
- Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation [6.233189707488025]
In this article, we analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof.
We show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series.
arXiv Detail & Related papers (2024-08-14T12:49:58Z)
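For the adaptive-LIF paper above, a minimal simulation of one common formulation of adaptation (a spike-triggered adaptive threshold) shows the spike-frequency adaptation such networks exploit. The model variant and all constants are assumptions, not the paper's exact formulation.

```python
# Minimal adaptive LIF (ALIF) neuron: each spike raises an adaptation
# variable that inflates the firing threshold, so inter-spike intervals
# lengthen under constant input (illustrative parameters).
import numpy as np

dt = 1e-3                          # time step [s]
tau_mem, tau_adapt = 20e-3, 200e-3 # membrane and adaptation time constants
b0, beta = 1.0, 1.8                # baseline threshold, adaptation strength

v, b = 0.0, 0.0                    # membrane potential, adaptation variable
spikes = []
for step in range(1000):
    I = 1.5                                # constant input current
    v += dt / tau_mem * (-v + I)           # leaky integration
    b += dt / tau_adapt * (-b)             # adaptation decays back to zero
    if v >= b0 + beta * b:                 # spike at the adaptive threshold
        spikes.append(step * dt)
        v = 0.0                            # reset membrane
        b += 1.0                           # raise threshold after each spike

print(len(spikes), "spikes; inter-spike intervals lengthen over time:")
print(np.round(np.diff(spikes), 3))
```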
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
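To make the regular-versus-chaotic setup concrete, the logistic map is a standard way to generate both classes of series; the parameter values below are textbook choices, and the paper's actual datasets may differ.

```python
# Regular vs. chaotic series from the logistic map x_{n+1} = r*x*(1-x):
# r = 3.5 gives a period-4 orbit, r = 3.9 a chaotic one.
import numpy as np

def logistic_series(r, n=500, x0=0.4, burn_in=100):
    x, out = x0, []
    for i in range(n + burn_in):
        x = r * x * (1.0 - x)
        if i >= burn_in:              # discard the initial transient
            out.append(x)
    return np.array(out)

regular = logistic_series(3.5)        # periodic regime
chaotic = logistic_series(3.9)        # chaotic regime

# Quick fingerprint: the periodic series revisits the same few values,
# the chaotic one does not.
print("distinct values, regular:", len(set(np.round(regular, 6))))
print("distinct values, chaotic:", len(set(np.round(chaotic, 6))))
```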
- Gradient-descent hardware-aware training and deployment for mixed-signal Neuromorphic processors [2.812395851874055]
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
arXiv Detail & Related papers (2023-03-14T08:56:54Z)
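The sketch below shows one generic ingredient of hardware-aware training as discussed in the paper above: injecting multiplicative parameter noise during training so that gradient descent finds solutions robust to device mismatch. It is a simplified stand-in for the paper's methodology; the noise level and the toy task are assumptions.

```python
# Mismatch-aware training sketch: model analog device mismatch as
# multiplicative parameter noise in the forward pass, so the learned
# solution stays accurate under deployment-time parameter variations.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))       # toy regression inputs
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)
lr, sigma = 0.05, 0.2               # learning rate; ~20% mismatch (assumed)
for epoch in range(500):
    noise = 1.0 + sigma * rng.normal(size=w.shape)  # fresh mismatch draw
    w_dev = w * noise                               # "as-deployed" parameters
    err = X @ w_dev - y
    grad_dev = X.T @ err / len(y)                   # gradient w.r.t. w_dev
    w -= lr * grad_dev * noise                      # chain rule through noise

losses = [np.mean((X @ (w * (1.0 + sigma * rng.normal(size=w.shape))) - y) ** 2)
          for _ in range(100)]
print("worst-case loss over 100 mismatch draws:", max(losses))
```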
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
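The snnTorch package from the paper above is easy to try on CPU. This minimal loop uses the package's documented Leaky API; the IPU-specific optimizations the paper describes are not shown here, and the toy input is an assumption.

```python
# Minimal snnTorch usage: a single leaky integrate-and-fire neuron
# stepped through time.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)      # membrane decay factor per step
mem = lif.init_leaky()         # initialize hidden membrane state

spk_count = 0
for t in range(100):
    cur = 0.5 * torch.rand(1)  # random input current (toy input)
    spk, mem = lif(cur, mem)   # one step: returns spike and new state
    spk_count += int(spk.item())
print("spikes emitted over 100 steps:", spk_count)
```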
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL)
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
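The exact LTL rule is in the paper above; the sketch below illustrates the general tandem idea it builds on: a teacher network supplies per-layer activation targets, and each spiking layer is trained with a purely local loss, avoiding end-to-end backpropagation through spike times. Every detail here (architecture, coding scheme, learning rate) is an assumption.

```python
# Generic layer-local "tandem" training sketch: a frozen teacher layer
# provides activation targets, and the spiking student layer minimizes a
# local rate error with a delta rule -- no backprop across layers.
import numpy as np

rng = np.random.default_rng(2)
T = 50                                       # time steps per presentation

def snn_layer_rates(W, p_in, T=T):
    """Time-averaged firing rates of soft-reset IF neurons driven by
    Bernoulli rate-coded input spikes with probabilities p_in."""
    v = np.zeros(W.shape[0])
    counts = np.zeros(W.shape[0])
    for _ in range(T):
        inp = (rng.random(p_in.shape) < p_in).astype(float)
        v += W @ inp
        fired = v >= 1.0
        counts += fired
        v[fired] -= 1.0                      # soft reset keeps residual charge
    return counts / T

W_teacher = rng.uniform(0.0, 0.3, size=(4, 8))   # frozen teacher layer
W_student = rng.uniform(0.0, 0.3, size=(4, 8))   # spiking layer being trained

lr = 0.2
for step in range(500):
    x = rng.uniform(0.0, 1.0, size=8)            # input firing probabilities
    target = np.clip(W_teacher @ x, 0.0, 1.0)    # teacher's clipped activation
    err = snn_layer_rates(W_student, x) - target # local error signal
    W_student -= lr * np.outer(err, x)           # delta rule, local to the layer

x = rng.uniform(0.0, 1.0, size=8)
mse = np.mean((snn_layer_rates(W_student, x) - np.clip(W_teacher @ x, 0, 1)) ** 2)
print("local MSE after training:", round(float(mse), 4))
```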
- Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- Neural net modeling of equilibria in NSTX-U [0.0]
We develop two neural networks relevant to equilibrium and shape control modeling.
Networks include Eqnet, a free-boundary equilibrium solver trained on the EFIT01 reconstruction algorithm, and Pertnet, which is trained on the Gspert code.
We report strong performance for both networks indicating that these models could reliably be used within closed-loop simulations.
arXiv Detail & Related papers (2022-02-28T16:09:58Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
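The conversion idea underlying tandem-style frameworks like the one above fits in a few lines: an integrate-and-fire neuron with soft reset fires at a rate that approximates a clipped ReLU of its constant input, which is why trained ANN weights can be reused in an SNN. This is a textbook illustration, not the paper's progressive scheme.

```python
# Core of rate-based ANN-to-SNN conversion: a soft-reset IF neuron's firing
# rate under constant drive approximates a ReLU clipped at one spike/step.
import numpy as np

def if_rate(drive, T=1000):
    """Firing rate of a soft-reset IF neuron under constant drive."""
    v, n_spikes = 0.0, 0
    for _ in range(T):
        v += drive
        if v >= 1.0:
            v -= 1.0          # soft reset preserves residual charge
            n_spikes += 1
    return n_spikes / T

for drive in [-0.2, 0.0, 0.3, 0.7, 1.0]:
    print(f"drive={drive:+.1f}  IF rate={if_rate(drive):.3f}  "
          f"clipped ReLU={np.clip(drive, 0.0, 1.0):.3f}")
```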
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
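For the equilibrium propagation paper above, the two-phase training scheme can be sketched on a tiny continuous Hopfield-style network: a free-phase relaxation, a weakly nudged phase, and a local contrastive weight update. The task, architecture, and hyperparameters below are illustrative assumptions; only the two-phase scheme follows the method.

```python
# Minimal equilibrium propagation (EqProp) sketch: settle to a free-phase
# fixed point, settle again with the output weakly nudged toward the target,
# then update each weight from the difference of local activity products.
import numpy as np

rng = np.random.default_rng(3)

def rho(s):  return np.clip(s, 0.0, 1.0)                    # hard-sigmoid
def drho(s): return ((s >= 0.0) & (s <= 1.0)).astype(float) # its derivative

n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
beta, lr, dt, steps = 0.5, 0.2, 0.5, 60

def relax(x, y=None):
    """Settle the state by gradient descent on the network energy;
    if y is given, add the beta-weighted nudging toward the target."""
    h, o = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(steps):
        dh = -h + drho(h) * (W1 @ x + W2.T @ rho(o))
        do = -o + drho(o) * (W2 @ rho(h))
        if y is not None:
            do += beta * (y - o)          # nudged phase only
        h, o = h + dt * dh, o + dt * do
    return h, o

# Toy task (assumed): XOR of two binary inputs
data = [(np.array([a, b], float), np.array([float(a ^ b)]))
        for a in (0, 1) for b in (0, 1)]

for it in range(4000):
    x, y = data[rng.integers(4)]
    h0, o0 = relax(x)                     # free phase
    hb, ob = relax(x, y)                  # nudged phase
    # Contrastive, purely local updates: difference of activity products
    W1 += (lr / beta) * (np.outer(rho(hb), x) - np.outer(rho(h0), x))
    W2 += (lr / beta) * (np.outer(rho(ob), rho(hb)) - np.outer(rho(o0), rho(h0)))

for x, y in data:
    _, o = relax(x)
    print(x, "->", np.round(o, 2), " target", y)
```

Because each update depends only on the two endpoints of a connection, the rule is local in the same sense the analog-hardware setting requires; no explicit backpropagation pass is ever computed.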