Gradient-descent hardware-aware training and deployment for mixed-signal neuromorphic processors
- URL: http://arxiv.org/abs/2303.12167v2
- Date: Thu, 15 Feb 2024 04:00:02 GMT
- Title: Gradient-descent hardware-aware training and deployment for mixed-signal neuromorphic processors
- Authors: Uğurcan Çakal, Maryada, Chenxi Wu, Ilkay Ulusoy, Dylan R. Muir
- Abstract summary: Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mixed-signal neuromorphic processors provide extremely low-power operation
for edge inference workloads, taking advantage of sparse asynchronous
computation within Spiking Neural Networks (SNNs). However, deploying robust
applications to these devices is complicated by limited controllability over
analog hardware parameters, as well as unintended parameter and dynamical
variations of analog circuits due to fabrication non-idealities. Here we
demonstrate a novel methodology for offline training and deployment of spiking
neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
The methodology utilizes gradient-based training using a differentiable
simulation of the mixed-signal device, coupled with an unsupervised weight
quantization method to optimize the network's parameters. Parameter noise
injection during training provides robustness to the effects of quantization
and device mismatch, making the method a promising candidate for real-world
applications under hardware constraints and non-idealities. This work extends
Rockpool, an open-source deep-learning library for SNNs, with support for
accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the
development and deployment process for the neuromorphic community, making
mixed-signal neuromorphic processors more accessible to researchers and
developers.
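The training pattern described in the abstract (gradient descent through a differentiable surrogate of the device, with parameter noise injected at every step so the network tolerates quantization and device mismatch) can be sketched compactly. The snippet below is a minimal illustration in plain PyTorch, not the Rockpool / DYNAP-SE2 implementation: the feedforward network stands in for the spiking model, and the noise level `mismatch_std`, level count `levels`, and the straight-through quantizer are illustrative assumptions.

```python
# Minimal sketch of mismatch-aware training with noise injection and
# straight-through weight quantization. Illustrative only; not the
# Rockpool / DYNAP-SE2 pipeline described in the paper.
import torch
import torch.nn as nn

def quantize_ste(w: torch.Tensor, levels: int = 16) -> torch.Tensor:
    """Quantize weights to a few discrete levels, using a
    straight-through estimator so gradients pass unchanged."""
    scale = w.abs().max().clamp_min(1e-8) / (levels // 2)
    w_q = torch.round(w / scale) * scale
    return w + (w_q - w).detach()  # forward: quantized, backward: identity

class NoisyLinear(nn.Linear):
    """Linear layer that mimics analog device mismatch by applying
    multiplicative Gaussian noise to the (quantized) weights in training."""
    def __init__(self, in_f: int, out_f: int, mismatch_std: float = 0.2):
        super().__init__(in_f, out_f)
        self.mismatch_std = mismatch_std

    def forward(self, x):
        w = quantize_ste(self.weight)
        if self.training:
            w = w * (1.0 + self.mismatch_std * torch.randn_like(w))
        return nn.functional.linear(x, w, self.bias)

# Toy usage: train a small network to stay accurate under random
# per-parameter perturbations, as a stand-in for fabrication mismatch.
model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(), NoisyLinear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

At deployment, the trained weights would be quantized once and written to the device; the noise injected during training is what leaves the network tolerant to that final, uncontrolled perturbation.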
Related papers
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks.
Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption.
Split computing - where an SNN is partitioned across two devices - is a promising solution.
This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Genetic Motifs as a Blueprint for Mismatch-Tolerant Neuromorphic Computing [1.8292454465322363]
Mixed-signal implementations of SNNs offer a promising solution to edge computing applications.
Device mismatch in the analog circuits of these neuromorphic processors poses a significant challenge to the deployment of robust processing.
We introduce a novel architectural solution inspired by biological development to address this issue.
arXiv Detail & Related papers (2024-10-25T09:04:50Z) - A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
arXiv Detail & Related papers (2021-02-12T09:20:49Z) - Training of mixed-signal optical convolutional neural network with reduced quantization level [1.3381749415517021]
Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency.
Here we report a training method for mixed-signal ANNs that accounts for two types of errors in their analog signals: random noise and deterministic errors (distortions).
The results showed that mixed-signal ANNs trained with our proposed method can achieve an equivalent classification accuracy with noise level up to 50% of the ideal quantization step size.
arXiv Detail & Related papers (2020-08-20T20:46:22Z) - Ultra-Low-Power FDSOI Neural Circuits for Extreme-Edge Neuromorphic Intelligence [2.6199663901387997]
In-memory computing mixed-signal neuromorphic architectures provide promising ultra-low-power solutions for edge-computing sensory-processing applications.
We present a set of mixed-signal analog/digital circuits that exploit the features of advanced Fully-Depleted Silicon on Insulator (FDSOI) integration processes.
arXiv Detail & Related papers (2020-06-25T09:31:29Z)