Fast Exploration of the Impact of Precision Reduction on Spiking Neural
Networks
- URL: http://arxiv.org/abs/2212.11782v1
- Date: Thu, 22 Dec 2022 15:08:05 GMT
- Title: Fast Exploration of the Impact of Precision Reduction on Spiking Neural
Networks
- Authors: Sepide Saeedi, Alessio Carpegna, Alessandro Savino and Stefano Di
Carlo
- Abstract summary: Spiking Neural Networks (SNNs) are a practical choice when the target hardware operates at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
- Score: 63.614519238823206
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Approximate Computing (AxC) techniques trade off the computation accuracy for
performance, energy, and area reduction gains. The trade-off is particularly
convenient when the applications are intrinsically tolerant to some accuracy
loss, as in the case of Spiking Neural Networks (SNNs). SNNs are a practical
choice when the target hardware operates at the edge of computing, but this
requires area minimization strategies. In this work, we employ an Interval
Arithmetic (IA) model to develop an exploration methodology that takes
advantage of the capability of such a model to propagate the approximation
error and to detect when the approximation exceeds the limits tolerated by the
application. Experimental results confirm that the methodology significantly
reduces the exploration time, making it possible to further shrink the size of
the network parameters and to obtain more fine-grained results.
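The abstract gives no implementation details, but the core mechanism lends itself to a short sketch: represent each approximated weight as an interval bounding its worst-case quantization error, propagate the intervals through the neuron update, and reject a precision configuration when the membrane-potential interval straddles the firing threshold, i.e., when the spike/no-spike decision becomes ambiguous. Everything below (the leaky integrate-and-fire update, the uniform quantization step, all names) is an illustrative assumption, not the authors' code.

```python
# Illustrative sketch (not the authors' code): interval propagation through
# one leaky integrate-and-fire (LIF) step to detect when weight quantization
# makes the spike decision ambiguous.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

def quantize_to_interval(w: float, n_bits: int) -> Interval:
    """Bound a weight by the worst-case error of n-bit uniform quantization,
    assuming a weight range of [-1, 1]."""
    step = 2.0 ** (1 - n_bits)
    return Interval(w - step / 2, w + step / 2)

def lif_step(v: Interval, weights, spikes, leak=0.9, n_bits=8) -> Interval:
    """One LIF update with interval-valued weights: v' = leak*v + sum(w_i*s_i),
    with binary input spikes s_i."""
    v = Interval(v.lo * leak, v.hi * leak)
    for w, s in zip(weights, spikes):
        if s:  # spikes are 0/1, so the weight interval is added as-is
            v = v + quantize_to_interval(w, n_bits)
    return v

threshold = 1.0
v = lif_step(Interval(0.0, 0.0), weights=[0.4, 0.35, 0.3],
             spikes=[1, 1, 1], n_bits=4)
if v.lo >= threshold or v.hi < threshold:
    print("spike decision unambiguous at this precision")
else:
    print("interval straddles the threshold: approximation not tolerable")
```

Because one interval pass bounds the effect of an entire precision configuration, candidate bit-widths can be ruled out without re-simulating the network input by input, which is plausibly where the reported exploration-time savings come from.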
Related papers
- Pruning Convolutional Filters via Reinforcement Learning with Entropy
Minimization [0.0]
We introduce a novel information-theoretic reward function which minimizes the spatial entropy of convolutional activations.
Our method shows that accuracy can be preserved without directly optimizing for it in the agent's reward function.
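A rough illustration of the reward idea follows. The histogram-based entropy estimator and its fixed binning range are assumptions made for the sketch; the paper's precise definition of spatial entropy may differ.

```python
# Hedged sketch: an information-theoretic pruning reward that decreases with
# the (estimated) spatial entropy of convolutional activation maps.
import numpy as np

def spatial_entropy(activation: np.ndarray, n_bins: int = 16) -> float:
    """Shannon entropy of the value histogram of one HxW activation map,
    binned over an assumed fixed range."""
    hist, _ = np.histogram(activation, bins=n_bins, range=(-3.0, 3.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pruning_reward(feature_maps: np.ndarray) -> float:
    """Reward an RL pruning agent for configurations whose surviving filters
    produce low-entropy maps; feature_maps has shape (channels, H, W)."""
    return -float(np.mean([spatial_entropy(fm) for fm in feature_maps]))

# A near-constant map (low entropy) earns a higher reward than pure noise.
rng = np.random.default_rng(0)
flat = np.full((1, 8, 8), 0.5) + 0.01 * rng.standard_normal((1, 8, 8))
noisy = rng.standard_normal((1, 8, 8))
print(pruning_reward(flat), ">", pruning_reward(noisy))
```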
arXiv Detail & Related papers (2023-12-08T09:34:57Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
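As a generic illustration of the mixed-precision idea (and not the paper's neural-operator-specific scheme or its approximation-bound machinery), PyTorch's autocast context runs matrix multiplications in a low-precision type while parameters stay in float32:

```python
# Generic mixed-precision sketch using PyTorch autocast; the model here is a
# placeholder, not a neural operator.
import torch

model = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.GELU(),
                            torch.nn.Linear(256, 256))
x = torch.randn(32, 256)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)   # matmuls run in bfloat16; weights remain float32
print(y.dtype)     # torch.bfloat16
```

Roughly speaking, memory savings of the reported kind come from storing activations in half-width types; the paper's focus is on bounding the accuracy cost of such choices.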
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploying large-scale Deep Neural Networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Navigating Local Minima in Quantized Spiking Neural Networks [3.1351527202068445]
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
These networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds.
This paper presents a systematic evaluation of a cosine-annealed learning-rate (LR) schedule coupled with weight-independent adaptive moment estimation.
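The evaluated recipe maps onto standard components. The sketch below assumes "weight-independent adaptive moment estimation" denotes a standard Adam-style optimizer, and uses a placeholder model with a stand-in loss; the paper's quantized SNN and its surrogate gradients are omitted.

```python
# Minimal sketch: Adam coupled with a cosine-annealed learning-rate schedule.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)     # placeholder for a quantized SNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
for epoch in range(100):
    loss = F.cross_entropy(model(x), y)   # stand-in for a surrogate-gradient
    optimizer.zero_grad()                 # spike loss
    loss.backward()
    optimizer.step()
    scheduler.step()   # cosine-anneal the learning rate once per epoch
```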
arXiv Detail & Related papers (2022-02-15T06:42:25Z)
- Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- FasterPose: A Faster Simple Baseline for Human Pose Estimation [65.8413964785972]
We propose a design paradigm for a cost-effective network with low-resolution (LR) representation for efficient pose estimation, named FasterPose.
We study the training behavior of FasterPose and formulate a novel regressive cross-entropy (RCE) loss function to accelerate convergence.
Compared with the previously dominant pose-estimation network, our method reduces FLOPs by 58% while improving accuracy by 1.3%.
arXiv Detail & Related papers (2021-07-07T13:39:08Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- From DNNs to GANs: Review of efficient hardware architectures for deep
learning [0.0]
Neural networks and deep learning have started to impact the present research paradigm.
Conventional DSP processors are incapable of performing neural network, activation function, convolutional neural network, and generative adversarial network operations.
Different algorithms have therefore been adapted to design DSP processors capable of fast execution of neural network, activation function, convolutional neural network, and generative adversarial network workloads.
arXiv Detail & Related papers (2021-06-06T13:23:06Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, that fully eliminates floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
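As a hedged illustration of what removing floating-point computation involves at the tensor level (not AQD's exact quantization scheme), a symmetric uniform quantizer maps a tensor onto n-bit integer levels and back:

```python
# Symmetric uniform fake-quantization sketch; the per-tensor scale choice and
# the bit-width are illustrative assumptions, not AQD's scheme.
import torch

def fake_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (n_bits - 1) - 1                  # e.g. 7 for 4 bits
    scale = x.abs().max().clamp(min=1e-8) / qmax  # per-tensor scale
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

w = torch.randn(16, 3, 3, 3)                      # a conv weight tensor
print((w - fake_quantize(w, 4)).abs().max())      # worst-case 4-bit error
```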
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Explicitly Trained Spiking Sparsity in Spiking Neural Networks with
Backpropagation [7.952659059689134]
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
We propose an explicit inclusion of spike counts in the loss function, along with a traditional error loss, to optimize weight parameters for both accuracy and spiking sparsity.
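The summary pins down the shape of the objective: a conventional error loss plus an explicit spike-count term. A minimal sketch follows, with the penalty weight and the aggregation of spike counts over layers as assumptions:

```python
# Sketch of an accuracy-plus-sparsity objective for SNN training; the weight
# lambda_spikes and the layer aggregation are illustrative assumptions.
import torch
import torch.nn.functional as F

def sparse_spiking_loss(logits, targets, spike_trains, lambda_spikes=1e-4):
    """Cross-entropy error plus a penalty on the total number of spikes.
    spike_trains: binary tensors, one per layer, shape (T, batch, units)."""
    error = F.cross_entropy(logits, targets)
    spike_count = sum(s.sum() for s in spike_trains)
    return error + lambda_spikes * spike_count

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
spikes = [torch.randint(0, 2, (25, 8, 100)).float()]  # one hidden layer
print(sparse_spiking_loss(logits, targets, spikes))
```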
arXiv Detail & Related papers (2020-03-02T23:39:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.