Gradient-descent hardware-aware training and deployment for mixed-signal Neuromorphic processors
- URL: http://arxiv.org/abs/2303.12167v2
- Date: Thu, 15 Feb 2024 04:00:02 GMT
- Title: Gradient-descent hardware-aware training and deployment for mixed-signal Neuromorphic processors
- Authors: Uğurcan Çakal, Maryada, Chenxi Wu, Ilkay Ulusoy, Dylan R. Muir
- Abstract summary: Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
- Score: 2.812395851874055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mixed-signal neuromorphic processors provide extremely low-power operation
for edge inference workloads, taking advantage of sparse asynchronous
computation within Spiking Neural Networks (SNNs). However, deploying robust
applications to these devices is complicated by limited controllability over
analog hardware parameters, as well as unintended parameter and dynamical
variations of analog circuits due to fabrication non-idealities. Here we
demonstrate a novel methodology for offline training and deployment of spiking
neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
The methodology utilizes gradient-based training using a differentiable
simulation of the mixed-signal device, coupled with an unsupervised weight
quantization method to optimize the network's parameters. Parameter noise
injection during training provides robustness to the effects of quantization
and device mismatch, making the method a promising candidate for real-world
applications under hardware constraints and non-idealities. This work extends
Rockpool, an open-source deep-learning library for SNNs, with support for
accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the
development and deployment process for the neuromorphic community, making
mixed-signal neuromorphic processors more accessible to researchers and
developers.
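A minimal, illustrative sketch of the noise-injection and post-training quantization ideas described in the abstract is given below. It assumes a plain PyTorch feed-forward network as a stand-in for the differentiable DYNAP-SE2 simulation; the noise model, quantizer, and all names and values here are hypothetical and are not Rockpool's API.

```python
# Minimal, illustrative sketch only. The paper's actual pipeline uses Rockpool's
# differentiable DYNAP-SE2 simulation; here a plain feed-forward network stands
# in for the simulated SNN, and every name/value below is hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

def noisy_forward(model, x, mismatch_sigma=0.2):
    """Forward pass with multiplicative Gaussian noise on the weights,
    mimicking device mismatch; noise is resampled on every call."""
    out = x
    for layer in model:
        if isinstance(layer, nn.Linear):
            noise = 1.0 + mismatch_sigma * torch.randn_like(layer.weight)
            out = F.linear(out, layer.weight * noise, layer.bias)
        else:
            out = layer(out)
    return out

def quantize_weights(model, n_levels=16):
    """Post-training uniform quantization of the weights onto a small set of
    levels, a stand-in for mapping onto the limited on-chip parameter space."""
    with torch.no_grad():
        for layer in model:
            if isinstance(layer, nn.Linear):
                scale = layer.weight.abs().max() / (n_levels // 2)
                layer.weight.copy_(torch.round(layer.weight / scale) * scale)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Synthetic data as a placeholder for a real spiking dataset.
loader = [(torch.randn(32, 64), torch.randint(0, 10, (32,))) for _ in range(100)]

for x, y in loader:
    opt.zero_grad()
    loss = criterion(noisy_forward(model, x), y)  # train through the noisy forward pass
    loss.backward()
    opt.step()

quantize_weights(model)  # quantize once training has converged
```

Because the parameters are perturbed on every forward pass, the network is pushed toward solutions that remain accurate after quantization and under device mismatch.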
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
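The entry above describes attaching a few bits of payload to each exchanged spike. Below is a minimal sketch of one possible payload encoding, assuming a uniform quantizer over a bounded graded spike amplitude; the paper's actual multi-level spike format may differ.

```python
import numpy as np

def encode_spike_payload(amplitude, n_bits=3, v_max=1.0):
    """Quantize a graded spike amplitude in [0, v_max] to an n-bit integer payload."""
    levels = 2 ** n_bits - 1
    return int(round(np.clip(amplitude, 0.0, v_max) / v_max * levels))

def decode_spike_payload(code, n_bits=3, v_max=1.0):
    """Reconstruct the graded amplitude at the receiving neuron."""
    return code / (2 ** n_bits - 1) * v_max

code = encode_spike_payload(0.62)     # -> 4 with a 3-bit payload
value = decode_spike_payload(code)    # -> ~0.571 after quantization
```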
- Genetic Motifs as a Blueprint for Mismatch-Tolerant Neuromorphic Computing [1.8292454465322363]
Mixed-signal implementations of SNNs offer a promising solution to edge computing applications.
Device mismatch in the analog circuits of these neuromorphic processors poses a significant challenge to the deployment of robust processing.
We introduce a novel architectural solution inspired by biological development to address this issue.
arXiv Detail & Related papers (2024-10-25T09:04:50Z)
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
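The mixed-precision entry above rests on the observation that different parts of a language model tolerate quantization differently. Below is a minimal sketch of a sensitivity-driven bit allocation heuristic, assuming sensitivity is measured as the loss increase when a single layer is quantized on a calibration batch; the paper's actual sensitivity measures and bit-width search may differ.

```python
import torch
import torch.nn as nn

def quantize_tensor(w, n_bits):
    """Uniform symmetric quantization of a weight tensor to n_bits."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / q_max
    return torch.round(w / scale).clamp(-q_max, q_max) * scale

def layer_sensitivity(model, layer, x, y, criterion, probe_bits=2):
    """Loss increase when only `layer` is quantized to a very low bit width."""
    with torch.no_grad():
        baseline = criterion(model(x), y).item()
        saved = layer.weight.data.clone()
        layer.weight.data = quantize_tensor(saved, probe_bits)
        degraded = criterion(model(x), y).item()
        layer.weight.data = saved
    return degraded - baseline

# Toy model and calibration batch; stand-ins for an LSTM/Transformer LM and real data.
model = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 10))
x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))
criterion = nn.CrossEntropyLoss()

linears = [m for m in model if isinstance(m, nn.Linear)]
sens = [layer_sensitivity(model, l, x, y, criterion) for l in linears]

# Give the most sensitive half of the layers 8 bits, the rest 4 bits.
threshold = sorted(sens)[len(sens) // 2]
for layer, s in zip(linears, sens):
    layer.weight.data = quantize_tensor(layer.weight.data, 8 if s >= threshold else 4)
```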
- Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
arXiv Detail & Related papers (2021-02-12T09:20:49Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
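The PINN entry above trains a network that maps coordinates to the PDE solution by penalizing the equation residual. Below is a minimal sketch for the 1D heat equation u_t = u_xx with a sinusoidal initial condition, assuming a small MLP and random collocation points; boundary terms and GatedPINN's architectural extensions are omitted.

```python
import torch
import torch.nn as nn

# Small MLP mapping (x, t) -> u(x, t).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def heat_residual(x, t):
    """Residual u_t - u_xx of the 1D heat equation, computed with autograd."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - u_xx

for step in range(1000):
    x = torch.rand(256, 1)             # interior collocation points in [0, 1]
    t = torch.rand(256, 1)
    x0 = torch.rand(64, 1)             # initial-condition points at t = 0
    u0 = torch.sin(torch.pi * x0)      # u(x, 0) = sin(pi * x)
    ic_pred = net(torch.cat([x0, torch.zeros_like(x0)], dim=1))
    loss = (heat_residual(x, t) ** 2).mean() + ((ic_pred - u0) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```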
- Training of mixed-signal optical convolutional neural network with reduced quantization level [1.3381749415517021]
Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency.
Here we report a training method for mixed-signal ANNs with two types of errors in their analog signals: random noise and deterministic errors (distortions).
The results showed that mixed-signal ANNs trained with our proposed method can achieve an equivalent classification accuracy with noise level up to 50% of the ideal quantization step size.
arXiv Detail & Related papers (2020-08-20T20:46:22Z)
- Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)