Supervised training of spiking neural networks for robust deployment on
mixed-signal neuromorphic processors
- URL: http://arxiv.org/abs/2102.06408v1
- Date: Fri, 12 Feb 2021 09:20:49 GMT
- Title: Supervised training of spiking neural networks for robust deployment on
mixed-signal neuromorphic processors
- Authors: Julian Büchel, Dmitrii Zendrikov, Sergio Solinas, Giacomo Indiveri,
Dylan R. Muir
- Abstract summary: Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
- Score: 2.6949002029513167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixed-signal analog/digital electronic circuits can emulate spiking neurons
and synapses with extremely high energy efficiency, following an approach known
as "neuromorphic engineering". However, analog circuits are sensitive to
variation in fabrication among transistors in a chip ("device mismatch"). In
the case of neuromorphic implementation of Spiking Neural Networks (SNNs),
mismatch is expressed as differences in effective parameters between
identically-configured neurons and synapses. Each fabricated chip therefore
provides a different distribution of parameters such as time constants or
synaptic weights. Without the expensive overhead in terms of area and power of
extra on-chip learning or calibration circuits, device mismatch and other noise
sources represent a critical challenge for the deployment of pre-trained neural
network chips. Here we present a supervised learning approach that addresses
this challenge by maximizing robustness to mismatch and other common sources of
noise.
The proposed method trains SNNs to perform temporal classification tasks by
mimicking a pre-trained dynamical system, using a local learning rule adapted
from non-linear control theory. We demonstrate the functionality of our model
on two tasks that require memory to perform successfully, and measure the
robustness of our approach to several forms of noise and variability present in
the network. We show that our approach is more robust than several common
alternative approaches for training SNNs.
Our method provides a viable way to robustly deploy pre-trained networks on
mixed-signal neuromorphic hardware, without requiring per-device training or
calibration.
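The paper's method is not reproduced here, but the following minimal sketch conveys the general idea under stated assumptions: a "student" rate network is driven to track a pre-trained target dynamical system, with the tracking error fed back as a control current and a purely local outer-product weight update, in the spirit of adaptive control. All constants, variable names, and the simplified rate dynamics are illustrative, not the authors' implementation.

```python
# Rough sketch (assumed details, not the authors' code): a "student" rate
# network learns to track a pre-trained target dynamical system using a
# purely local, error-driven update.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 50, 2000, 1e-3, 20e-3

# Pre-trained "teacher": a stable random linear system x' = A x + c(t)
A = -np.eye(N) + 0.9 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

W = np.zeros((N, N))      # student recurrent weights, learned online
k_err, eta = 10.0, 1e-2   # error-feedback gain and learning rate

x, r = np.zeros(N), np.zeros(N)
for t in range(T):
    c = 0.1 * np.sin(2 * np.pi * 3 * t * dt) * np.ones(N)  # shared drive
    x = x + dt * (A @ x + c)                     # teacher trajectory
    err = x - r                                  # locally available error
    # student dynamics: the error is fed back as a control current
    r = r + dt / tau * (-r + np.tanh(W @ r) + c + k_err * err)
    W += eta * np.outer(err, np.tanh(r))         # local outer-product update

print("final tracking error:", np.linalg.norm(x - r))
```

In a mismatch-robust setting, the error-feedback term keeps the student close to the target trajectory even when its effective parameters are perturbed, which is what makes the learned solution tolerant to device variation.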
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs (see the sketch after this entry).
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work is a step towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations (see the sketch after this entry).
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Neuromorphic analog circuits for robust on-chip always-on learning in
- Neuromorphic analog circuits for robust on-chip always-on learning in spiking neural networks [1.9809266426888898]
Mixed-signal neuromorphic systems represent a promising solution for solving extreme-edge computing tasks.
Their spiking neural network circuits are optimized for processing sensory data on-line in continuous-time.
We design on-chip learning circuits with short-term analog dynamics and long-term tristate discretization mechanisms.
arXiv Detail & Related papers (2023-07-12T11:14:25Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Gradient-descent hardware-aware training and deployment for mixed-signal
Neuromorphic processors [2.812395851874055]
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2 (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-14T08:56:54Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch (see the example after this entry).
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Neural Network Training with Asymmetric Crosspoint Elements [1.0773924713784704]
- Neural Network Training with Asymmetric Crosspoint Elements [1.0773924713784704]
Asymmetric conductance modulation of practical resistive devices critically degrades the classification accuracy of networks trained with conventional algorithms.
Here, we describe and experimentally demonstrate an alternative fully-parallel training algorithm: Hamiltonian Descent.
We provide critical intuition on why device asymmetry is fundamentally incompatible with conventional training algorithms and how the new approach exploits it as a useful feature instead.
arXiv Detail & Related papers (2022-01-31T17:41:36Z) - An error-propagation spiking neural network compatible with neuromorphic
processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low-power mixed-signal neuromorphic processing systems (see the sketch after this entry).
arXiv Detail & Related papers (2021-04-12T07:21:08Z) - Inference with Artificial Neural Networks on Analog Neuromorphic
- Inference with Artificial Neural Networks on Analog Neuromorphic Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication-and-accumulation mode for artificial neural networks.
arXiv Detail & Related papers (2020-06-23T17:25:06Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning (see the sketch after this entry).
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.