Fine-Tuning Surrogate Gradient Learning for Optimal Hardware Performance in Spiking Neural Networks
- URL: http://arxiv.org/abs/2402.06211v1
- Date: Fri, 9 Feb 2024 06:38:12 GMT
- Title: Fine-Tuning Surrogate Gradient Learning for Optimal Hardware Performance in Spiking Neural Networks
- Authors: Ilkin Aliyev and Tosiron Adegbija
- Abstract summary: Spiking Neural Networks (SNNs) can provide tremendous energy efficiency benefits when carefully exploited in hardware.
This work reveals novel insights into the impacts of training on hardware performance.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The highly sparse activations in Spiking Neural Networks (SNNs) can provide
tremendous energy efficiency benefits when carefully exploited in hardware. The
behavior of sparsity in SNNs is uniquely shaped by the dataset and training
hyperparameters. This work reveals novel insights into the impacts of training
on hardware performance. Specifically, we explore the trade-offs between model
accuracy and hardware efficiency. We focus on three key hyperparameters: the
surrogate gradient function, the membrane decay rate (beta), and the membrane
threshold. Results on an
FPGA-based hardware platform show that the fast sigmoid surrogate function
yields a lower firing rate with similar accuracy compared to the arctangent
surrogate on the SVHN dataset. Furthermore, by cross-sweeping the beta and
membrane threshold hyperparameters, we can achieve a 48% reduction in
hardware-based inference latency with only 2.88% trade-off in inference
accuracy compared to the default setting. Overall, this study highlights the
importance of fine-tuning model hyperparameters as crucial for designing
efficient SNN hardware accelerators, evidenced by the fine-tuned model
achieving a 1.72x improvement in accelerator efficiency (FPS/W) compared to the
most recent work.
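To make these hyperparameters concrete, below is a minimal PyTorch sketch of a leaky integrate-and-fire (LIF) neuron with a fast sigmoid surrogate gradient. The slope of 25, beta of 0.95, and threshold of 1.0 are common illustrative defaults, not the paper's tuned values, and this is not a reproduction of the paper's training setup.

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; fast sigmoid surrogate gradient
    (the derivative of x / (1 + slope*|x|)) in the backward pass."""

    @staticmethod
    def forward(ctx, mem_shift, slope=25.0):
        ctx.save_for_backward(mem_shift)
        ctx.slope = slope
        return (mem_shift > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (mem_shift,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + ctx.slope * mem_shift.abs()) ** 2
        return grad_output * surrogate, None

def lif_step(x, mem, beta=0.95, threshold=1.0):
    """One LIF time step: decay the membrane by beta, integrate the input,
    spike when the membrane crosses the threshold, reset by subtraction."""
    mem = beta * mem + x
    spk = FastSigmoidSpike.apply(mem - threshold)
    return spk, mem - spk * threshold

# Toy run: lowering beta or raising the threshold tends to lower the
# firing rate, which is what cuts switching activity (energy) in hardware.
mem, fired = torch.zeros(1), 0
for x in 0.5 * torch.rand(200, 1):
    spk, mem = lif_step(x, mem)
    fired += int(spk.item())
print(f"firing rate: {fired / 200:.2f}")
```

The trade-off the paper sweeps is visible here: beta and threshold shift the firing rate (and hence hardware latency and energy) while the surrogate slope shapes how well the network trains around the non-differentiable spike.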
Related papers
- Inference-to-complete: A High-performance and Programmable Data-plane Co-processor for Neural-network-driven Traffic Analysis [18.75879653408466]
The NN-driven intelligent data plane (NN-driven IDP) is an emerging topic, promising excellent accuracy and high performance.
Kaleidoscope is a flexible, high-performance co-processor that sits on the bypass path of the data plane.
Kaleidoscope reaches 256-352 ns inference latency and 100 Gbps throughput with negligible influence on the data-plane.
arXiv Detail & Related papers (2024-11-01T07:10:08Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the limited compute of IoVT devices by shifting data analysis to the edge.
Existing methods, however, struggle to balance high model performance with low resource consumption.
We propose a co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Efficient Hyperparameter Importance Assessment for CNNs [1.7778609937758323]
This paper aims to quantify the importance weights of some hyperparameters in Convolutional Neural Networks (CNNs) with an algorithm called N-RReliefF.
We conduct an extensive study by training over ten thousand CNN models across ten popular image classification datasets.
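The summary does not spell out N-RReliefF's exact update rule, so the following is a simplified sketch of the underlying RReliefF idea only: treat each trained model as an instance (hyperparameter vector X, validation accuracy y) and credit a hyperparameter when differing in it co-occurs with differing accuracy among near neighbours. Iteration counts, neighbourhood size, and normalization are assumptions.

```python
import numpy as np

def rrelieff(X, y, n_iters=300, k=10, seed=0):
    """Simplified RReliefF for regression targets (not the authors' exact
    N-RReliefF). X: (n, d) hyperparameter configs, y: (n,) accuracies."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xn = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)   # scale diffs to [0, 1]
    yn = (y - y.min()) / (np.ptp(y) + 1e-12)

    n_dy, n_da, n_dyda, m = 0.0, np.zeros(d), np.zeros(d), 0
    for _ in range(n_iters):
        i = rng.integers(n)
        dist = np.abs(Xn - Xn[i]).sum(1)
        dist[i] = np.inf                            # exclude the anchor itself
        for j in np.argsort(dist)[:k]:
            dy = abs(yn[i] - yn[j])
            da = np.abs(Xn[i] - Xn[j])
            n_dy, n_da, n_dyda, m = n_dy + dy, n_da + da, n_dyda + dy * da, m + 1
    # high weight: attribute differences concentrate where target differences are
    return n_dyda / n_dy - (n_da - n_dyda) / (m - n_dy)

# Toy check: accuracy depends mostly on the first hyperparameter.
rng = np.random.default_rng(1)
X = rng.random((400, 3))
y = 2.0 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * rng.standard_normal(400)
print(rrelieff(X, y))   # first weight should dominate
```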
arXiv Detail & Related papers (2024-10-11T15:47:46Z)
- Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology [2.968768532937366]
Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models.
We develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNNs) to reduced-precision spiking models.
arXiv Detail & Related papers (2024-10-07T05:04:13Z)
- ZOBNN: Zero-Overhead Dependable Design of Binary Neural Networks with Deliberately Quantized Parameters [0.0]
In this paper, we introduce a third advantage of very low-precision neural networks: improved fault-tolerance.
We investigate the impact of memory faults on state-of-the-art binary neural networks (BNNs) through comprehensive analysis.
We propose a technique to improve BNN dependability by restricting the range of float parameters through a novel deliberately uniform quantization.
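The summary leaves the quantizer unspecified; one plausible minimal form of "deliberately uniform quantization" is to clamp the remaining float parameters into a narrow range and snap them onto a coarse uniform grid, so a memory bit flip can only move a value within that bounded range rather than to an arbitrarily large float. The range and bit width below are assumptions for illustration.

```python
import torch

def deliberate_uniform_quantize(t: torch.Tensor, lo: float = -2.0,
                                hi: float = 2.0, bits: int = 8) -> torch.Tensor:
    """Clamp to [lo, hi] and snap onto a (2**bits)-level uniform grid.
    Values can then be stored as small integer codes, so a flipped bit
    perturbs a parameter by at most the grid span, never to +/-inf or NaN."""
    levels = 2 ** bits - 1
    code = torch.round((t.clamp(lo, hi) - lo) / (hi - lo) * levels)
    return lo + code / levels * (hi - lo)
```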
arXiv Detail & Related papers (2024-07-06T05:31:11Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can compress the model size, reduce memory footprint, and facilitate low-latency execution.
We study various combinations of pruning and quantization applied in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy down to ternary weights.
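The entry does not give the quantizer; a common ternarisation rule (the ternary-weight-network heuristic of a dead-zone threshold near 0.7·E|w| plus a per-tensor scale) would look roughly like the sketch below. The studied model's actual scheme may differ.

```python
import torch

def ternarize(w: torch.Tensor, delta_scale: float = 0.7) -> torch.Tensor:
    """Map weights to {-alpha, 0, +alpha}. Zeros double as pruned weights,
    so ternarisation combines quantization with implicit pruning."""
    delta = delta_scale * w.abs().mean()          # dead-zone threshold
    mask = (w.abs() > delta).float()
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)  # live-weight mean
    return alpha * torch.sign(w) * mask
```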
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning [72.83293818245978]
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z)
- Efficient Graph Neural Network Inference at Large Scale [54.89457550773165]
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications.
Existing scalable GNNs leverage linear propagation to preprocess the features and accelerate training and inference.
We propose a novel adaptive-propagation-order approach that generates a personalized propagation order for each node based on its topological information (see the sketch below).
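A hedged sketch of what per-node adaptive propagation can look like in the linear-propagation setting; the stopping rule, tolerance, and function names here are assumptions for illustration, not the paper's algorithm: propagate features hop by hop and freeze a node once one more hop barely changes it.

```python
import numpy as np

def adaptive_propagate(adj_norm: np.ndarray, X: np.ndarray,
                       max_hops: int = 10, tol: float = 1e-3) -> np.ndarray:
    """Linear feature propagation (SGC-style preprocessing) with a per-node
    stop: nodes whose features converge stop updating early, so each node
    effectively receives its own propagation depth. adj_norm is assumed to
    be a normalized adjacency matrix, e.g. D^-1/2 (A + I) D^-1/2."""
    out = X.astype(float)
    active = np.ones(X.shape[0], dtype=bool)
    for _ in range(max_hops):
        if not active.any():
            break
        nxt = adj_norm @ out                      # one more propagation hop
        delta = np.linalg.norm(nxt - out, axis=1)
        out[active] = nxt[active]                 # only active nodes move
        active &= delta > tol                     # freeze converged nodes
    return out
```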
arXiv Detail & Related papers (2022-11-01T14:38:18Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents a hardware accelerator for an SNN with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software, full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stage multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves performance comparable to large models with only about 0.2% of their parameters (100k) on popular salient object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.