Training of mixed-signal optical convolutional neural network with
reduced quantization level
- URL: http://arxiv.org/abs/2008.09206v1
- Date: Thu, 20 Aug 2020 20:46:22 GMT
- Title: Training of mixed-signal optical convolutional neural network with
reduced quantization level
- Authors: Joseph Ulseth, Zheyuan Zhu, Guifang Li, Shuo Pang
- Abstract summary: Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency.
Here we report a training method for mixed-signal ANNs with two types of errors in their analog signals: random noise and deterministic errors (distortions).
The results show that mixed-signal ANNs trained with our proposed method can achieve equivalent classification accuracy with noise levels up to 50% of the ideal quantization step size.
- Score: 1.3381749415517021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mixed-signal artificial neural networks (ANNs) that employ analog
matrix-multiplication accelerators can achieve higher speed and improved power
efficiency. Though analog computing is known to be susceptible to noise and
device imperfections, various analog computing paradigms have been considered
promising solutions to address the growing computing demand in machine
learning applications, thanks to the robustness of ANNs. This robustness has
been explored in low-precision, fixed-point ANN models, which have proven
successful at compressing ANN model size on digital computers. However, these
promising results and network training algorithms cannot be easily migrated to
analog accelerators. The reason is that digital computers typically carry
intermediate results at a higher bit width, even though the inputs and weights
of each ANN layer are of low bit width, whereas analog intermediate results
have low precision, analogous to digital signals with a reduced quantization
level. Here we report a training method for mixed-signal ANNs with two types
of errors in their analog signals: random noise and deterministic errors
(distortions). The results show that mixed-signal ANNs trained with our
proposed method can achieve equivalent classification accuracy with noise
levels up to 50% of the ideal quantization step size. We have demonstrated this
training method on a mixed-signal optical convolutional neural network based on
diffractive optics.
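A rough sketch of this kind of noise-aware training is given below, assuming PyTorch; the class name, the uniform quantizer, and all parameters are our own illustrative choices, not the authors' code. The forward pass fake-quantizes a tensor to a small number of levels and injects Gaussian noise at a fraction of the quantization step (the abstract's 50% figure would correspond to noise_frac=0.5), while the backward pass uses a straight-through estimator.

```python
import torch

class NoisyQuant(torch.autograd.Function):
    """Fake quantization with injected analog noise (illustrative sketch only)."""

    @staticmethod
    def forward(ctx, x, levels=16, noise_frac=0.5):
        lo, hi = x.min(), x.max()
        step = (hi - lo).clamp(min=1e-8) / (levels - 1)  # ideal quantization step
        q = torch.round((x - lo) / step) * step + lo     # uniform quantizer
        # Gaussian noise at noise_frac of the step size, mimicking random
        # error in the analog intermediate results.
        return q + noise_frac * step * torch.randn_like(q)

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: treat quantization + noise as identity.
        return grad_out, None, None

def noisy_quant(x, levels=16, noise_frac=0.5):
    return NoisyQuant.apply(x, levels, noise_frac)
```

Applied to the pre-activations of each convolutional layer during training, such a layer encourages the network to learn weights that tolerate both the coarse quantization and the injected noise.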
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Quantization of Deep Neural Networks to facilitate self-correction of
weights on Phase Change Memory-based analog hardware [0.0]
We develop an algorithm to approximate a set of multiplicative weights that represent the original network's weights with minimal loss in performance.
Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms.
arXiv Detail & Related papers (2023-09-30T10:47:25Z) - Gradient-descent hardware-aware training and deployment for mixed-signal
Neuromorphic processors [2.812395851874055]
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
arXiv Detail & Related papers (2023-03-14T08:56:54Z) - CorrectNet: Robustness Enhancement of Analog In-Memory Computing for
Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks, degraded to as low as 1.69% by variations and noise, can be recovered.
arXiv Detail & Related papers (2022-11-27T19:13:33Z) - Signal Detection in MIMO Systems with Hardware Imperfections: Message
Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, which hinders its practical application.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Low-bit Quantization of Recurrent Neural Network Language Models Using
Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM); a generic sketch of such an ADMM quantization loop appears after this list.
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs.
arXiv Detail & Related papers (2021-11-29T09:30:06Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Supervised training of spiking neural networks for robust deployment on
mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
arXiv Detail & Related papers (2021-02-12T09:20:49Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Inference with Artificial Neural Networks on Analog Neuromorphic
Hardware [0.0]
The BrainScaleS-2 ASIC comprises mixed-signal neuron and synapse circuits.
The system can also operate in a vector-matrix multiplication and accumulation mode for artificial neural networks.
arXiv Detail & Related papers (2020-06-23T17:25:06Z)
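As a concrete illustration of the ADMM quantization referenced in the list above, a minimal NumPy sketch of the alternating update loop might look as follows; loss_grad, the codebook, and all hyperparameters are placeholder assumptions, not the paper's implementation.

```python
import numpy as np

def admm_quantize(w, loss_grad, codebook, rho=1e-3, lr=1e-2, iters=200):
    """Generic ADMM loop for weight quantization (illustrative sketch).

    w         : 1-D float array of weights to quantize
    loss_grad : callable returning the training-loss gradient at w
    codebook  : 1-D array of allowed quantized values
    """
    g = w.copy()              # quantized auxiliary copy of the weights
    u = np.zeros_like(w)      # scaled dual variable
    for _ in range(iters):
        # 1) Primal step: descend loss + (rho/2) * ||w - g + u||^2.
        w = w - lr * (loss_grad(w) + rho * (w - g + u))
        # 2) Projection: snap w + u elementwise to the nearest codebook value.
        idx = np.abs((w + u)[:, None] - codebook[None, :]).argmin(axis=1)
        g = codebook[idx]
        # 3) Dual update: accumulate the violation of the constraint w = g.
        u = u + (w - g)
    return g
```

For example, admm_quantize(weights, grad_fn, np.array([-1.0, 0.0, 1.0])) would drive the weights toward a ternary codebook, with the dual variable keeping the full-precision and quantized copies consistent.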
This list is automatically generated from the titles and abstracts of the papers on this site.