To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA
Implementations
- URL: http://arxiv.org/abs/2306.12742v1
- Date: Thu, 22 Jun 2023 08:47:09 GMT
- Title: To Spike or Not to Spike? A Quantitative Comparison of SNN and CNN FPGA
Implementations
- Authors: Patrick Plagwitz, Frank Hannig, Jürgen Teich, Oliver Keszocze
- Abstract summary: Spiking Neural Networks (SNNs) are an emerging alternative to CNN implementations, promising higher resource and energy efficiency.
We present a novel encoding scheme of spike event queues and a novel memory organization technique to improve SNN energy efficiency.
For small-scale benchmarks such as MNIST, SNN designs offer little to no latency or energy efficiency advantage over corresponding CNN implementations.
- Score: 0.4405963753136216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional Neural Networks (CNNs) are widely employed to solve various
problems, e.g., image classification. Due to their compute- and data-intensive
nature, CNN accelerators have been developed as ASICs or on FPGAs. Increasing
complexity of applications has caused resource costs and energy requirements of
these accelerators to grow. Spiking Neural Networks (SNNs) are an emerging
alternative to CNN implementations, promising higher resource and energy
efficiency. The main research question addressed in this paper is whether SNN
accelerators truly meet these expectations of reduced energy requirements
compared to their CNN equivalents. For this purpose, we analyze multiple SNN
hardware accelerators for FPGAs regarding performance and energy efficiency. We
present a novel encoding scheme of spike event queues and a novel memory
organization technique to improve SNN energy efficiency further. Both
techniques have been integrated into a state-of-the-art SNN architecture and
evaluated for MNIST, SVHN, and CIFAR-10 datasets and corresponding network
architectures on two differently sized modern FPGA platforms. For small-scale
benchmarks such as MNIST, SNN designs offer little to no latency or
energy efficiency advantage over corresponding CNN implementations. For more
complex benchmarks such as SVHN and CIFAR-10, the trend reverses.
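The exact queue encoding and memory organization the paper proposes are not spelled out in this abstract. As a point of reference only, the sketch below illustrates the general idea behind compressed spike event queues: storing (neuron id, timestamp) events with delta-encoded timestamps so that each entry fits in one narrow fixed-width word. The field widths `id_bits` and `dt_bits` are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch of a compressed spike event queue (not the paper's
# actual scheme): pack (neuron_id, timestamp) events with delta-encoded
# timestamps so each queue entry needs fewer bits than absolute time.

def encode_queue(events, id_bits=12, dt_bits=4):
    """Pack (neuron_id, timestamp) events into fixed-width words."""
    words, prev_t = [], 0
    for neuron_id, t in sorted(events, key=lambda e: e[1]):
        dt = t - prev_t
        assert neuron_id < (1 << id_bits) and dt < (1 << dt_bits)
        words.append((neuron_id << dt_bits) | dt)  # one word per event
        prev_t = t
    return words

def decode_queue(words, id_bits=12, dt_bits=4):
    events, t = [], 0
    for w in words:
        t += w & ((1 << dt_bits) - 1)   # recover absolute time from delta
        events.append((w >> dt_bits, t))
    return events

spikes = [(3, 1), (17, 1), (3, 2), (250, 5)]
assert decode_queue(encode_queue(spikes)) == sorted(spikes, key=lambda e: e[1])
```

In hardware, narrower queue entries translate directly into a smaller on-chip memory footprint and lower read energy per event, which is the lever such encoding schemes pull.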
Related papers
- From Lightweight CNNs to SpikeNets: Benchmarking Accuracy-Energy Tradeoffs with Pruned Spiking SqueezeNet [2.8988539381054905]
Spiking Neural Networks (SNNs) are increasingly studied as energy-efficient alternatives to Convolutional Neural Networks (CNNs).
We present the first systematic benchmark of lightweight SNNs obtained by converting compact CNN architectures into spiking networks.
arXiv Detail & Related papers (2026-02-10T12:20:11Z)
- FeNN-DMA: A RISC-V SoC for SNN acceleration [2.560446860313122]
Spiking Neural Networks (SNNs) are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs).
We show that FeNN-DMA has comparable resource usage and energy requirements to state-of-the-art fixed-function SNN accelerators.
We demonstrate state-of-the-art classification accuracy when simulating the Spiking Heidelberg Digits and Neuromorphic MNIST tasks.
arXiv Detail & Related papers (2025-11-01T22:59:54Z)
- S$^2$NN: Sub-bit Spiking Neural Networks [53.08060832135342]
Spiking Neural Networks (SNNs) offer an energy-efficient paradigm for machine intelligence.
Despite recent advances in binary SNNs, the storage and computational demands remain substantial for large-scale networks.
We propose Sub-bit Spiking Neural Networks (S$^2$NNs) that represent weights with less than one bit.
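The abstract gives no detail on how S$^2$NN reaches sub-bit storage. A common way to get below one bit per weight, shown in this hedged sketch, is to let blocks of binary weights share entries of a small codebook, so each block costs only one short index; block size and codebook size here are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of sub-bit weight storage (not S^2NN's actual method):
# blocks of 8 binary weights share one of 16 codewords, so storage is
# log2(16)/8 = 0.5 bits per weight plus a fixed codebook.
import numpy as np

rng = np.random.default_rng(0)
BLOCK, CODEWORDS = 8, 16
codebook = rng.choice([-1, 1], size=(CODEWORDS, BLOCK))  # shared binary patterns

def compress(weights):
    """Map each block of real weights to its closest binary codeword index."""
    blocks = np.sign(weights.reshape(-1, BLOCK))
    return np.argmax(blocks @ codebook.T, axis=1)  # maximal sign agreement

def decompress(indices):
    return codebook[indices].reshape(-1)

w = rng.standard_normal(64)
idx = compress(w)                 # 8 indices of 4 bits each = 32 bits
w_hat = decompress(idx)           # binary reconstruction of 64 weights
print(f"{idx.size * 4 / w.size} bits/weight")  # -> 0.5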
arXiv Detail & Related papers (2025-09-29T04:17:44Z)
- Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware.
Most continuous-control algorithms, however, are designed for Artificial Neural Networks (ANNs); we show that this mismatch destabilizes SNN training and degrades performance.
We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous-control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z)
- SpikeX: Exploring Accelerator Architecture and Network-Hardware Co-Optimization for Sparse Spiking Neural Networks [3.758294848902233]
We propose a novel systolic-array SNN accelerator architecture, called SpikeX, to take on the challenges and opportunities stemming from unstructured sparsity.
SpikeX reduces memory accesses and increases data sharing and hardware utilization for computations spanning both time and space.
arXiv Detail & Related papers (2025-05-18T08:07:44Z)
- Differential Coding for Training-Free ANN-to-SNN Conversion [45.70141988713627]
Spiking Neural Networks (SNNs) exhibit significant potential due to their low energy consumption.
Converting Artificial Neural Networks (ANNs) to SNNs is an efficient way to achieve high-performance SNNs.
This article introduces differential coding for ANN-to-SNN conversion, a novel coding scheme that reduces spike counts and energy consumption.
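The paper's precise scheme is not given in this summary; the sketch below only illustrates why coding differences can cut spike counts: when successive activations change slowly, transmitting the delta needs far fewer spikes than re-transmitting each absolute value as a rate code. The `max_rate` parameter and the toy signal are illustrative assumptions.

```python
# Hedged sketch of the general idea behind differential coding (not the
# paper's exact scheme): spikes encode only the change from the previous
# timestep rather than the full activation each time.
def rate_spikes(values, max_rate=100):
    return sum(round(v * max_rate) for v in values)

def differential_spikes(values, max_rate=100):
    prev, total = 0.0, 0
    for v in values:
        total += abs(round((v - prev) * max_rate))  # spikes carry the delta
        prev = v
    return total

activations = [0.50, 0.52, 0.51, 0.53, 0.52]  # slowly varying signal
print(rate_spikes(activations))          # 258 spikes for absolute rate coding
print(differential_spikes(activations))  # 56 spikes for differential coding
```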
arXiv Detail & Related papers (2025-03-01T02:17:35Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- Spiker+: a framework for the generation of efficient Spiking Neural Networks FPGA accelerators for inference at the edge [49.42371633618761]
Spiker+ is a framework for generating efficient, low-power, and low-area customized Spiking Neural Networks (SNN) accelerators on FPGA for inference at the edge.
Spiker+ is tested on two benchmark datasets: MNIST and the Spiking Heidelberg Digits (SHD).
arXiv Detail & Related papers (2024-01-02T10:42:42Z)
- SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices [44.440915387556544]
AQFP devices serve as excellent carriers for binary neural network (BNN) computations.
We propose SupeRBNN, an AQFP-based randomized BNN acceleration framework.
We show that our design achieves energy efficiency approximately 7.8×10⁴ times higher than that of the ReRAM-based BNN framework.
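For orientation, the arithmetic a BNN accelerator implements (whether AQFP-, ReRAM-, or FPGA-based) is the XNOR-popcount dot product sketched below. This is the generic BNN kernel, not SupeRBNN's randomized AQFP circuitry.

```python
# Generic BNN kernel: with weights and activations in {-1, +1} packed as
# bits (1 -> +1, 0 -> -1), a dot product reduces to XNOR plus popcount.
def binary_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors packed as n-bit integers."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")  # XNOR+popcount
    return 2 * matches - n  # agreements minus disagreements

a = 0b1011  # +1, -1, +1, +1
w = 0b1001  # +1, -1, -1, +1
print(binary_dot(a, w, 4))  # 2: three agreements, one disagreement -> 3 - 1
```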
arXiv Detail & Related papers (2023-09-21T16:14:42Z)
- Are SNNs Truly Energy-efficient? $-$ A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-09-06T22:23:22Z)
- SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks [4.0300632886917]
SpikeSim is a tool that performs realistic performance, energy, latency, and area evaluation of compute-in-memory (IMC) mapped SNNs.
We propose SNN topological modifications that reduce the neuronal module's area by 1.24x and the overall energy-delay product by 10x.
arXiv Detail & Related papers (2022-10-24T01:07:17Z)
- Weightless Neural Networks for Efficient Edge Inference [1.7882696915798877]
Weightless Neural Networks (WNNs) are a class of machine learning models that use table lookups to perform inference.
We propose a novel WNN architecture, BTHOWeN, with key algorithmic and architectural improvements over prior work.
BTHOWeN targets the large and growing edge computing sector by providing superior latency and energy efficiency.
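The table-lookup principle WNNs rest on fits in a few lines. The sketch below is a bare RAM-node neuron, a simplified stand-in: BTHOWeN itself replaces plain tables with Bloom filters and counter-based bleaching, which are omitted here.

```python
# Minimal weightless (RAM-node) neuron: a lookup table addressed by a
# tuple of input bits. Training writes entries; inference is one memory
# read, with no multiplications at all.
class RAMNode:
    def __init__(self, n_inputs):
        self.table = [0] * (1 << n_inputs)

    def _addr(self, bits):
        addr = 0
        for b in bits:
            addr = (addr << 1) | b
        return addr

    def train(self, bits):
        self.table[self._addr(bits)] = 1   # memorize this input pattern

    def infer(self, bits):
        return self.table[self._addr(bits)]  # single table lookup

node = RAMNode(4)
node.train([1, 0, 1, 1])
print(node.infer([1, 0, 1, 1]), node.infer([0, 0, 1, 1]))  # 1 0
```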
arXiv Detail & Related papers (2022-03-03T01:46:05Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with only a 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
State-of-the-art training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
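As background for this entry (not the paper's algorithm), the standard conversion intuition is that a rate-coded integrate-and-fire neuron driven by a constant input approximates ReLU, with accuracy improving as the number of timesteps grows; that timestep count is precisely the latency such papers try to reduce. A baseline sketch, with illustrative values for `T` and `v_th`:

```python
# Rate-coding baseline behind DNN-to-SNN conversion: an integrate-and-fire
# neuron with constant input x fires at a rate approximating max(0, x).
def if_neuron_rate(x, T=1000, v_th=1.0):
    """Firing rate of an integrate-and-fire neuron over T timesteps."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                 # integrate the constant input
        if v >= v_th:
            v -= v_th          # soft reset on threshold crossing
            spikes += 1
    return spikes / T

for x in (-0.3, 0.0, 0.42, 0.9):
    print(x, if_neuron_rate(x))   # approximates ReLU(x) as T grows
```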
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
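For context, the event format referred to here is a stream of per-pixel tuples, and a hybrid SNN-ANN pipeline typically accumulates a window of events into a dense frame for the ANN stage while the SNN stage can consume the raw stream. The sketch below shows that accumulation step; the resolution, window, and event values are illustrative assumptions.

```python
# Event-camera data: (x, y, timestamp, polarity) tuples instead of frames.
import numpy as np

H, W = 4, 4
events = [  # (x, y, t_microseconds, polarity in {-1, +1})
    (0, 1, 10, +1),
    (2, 3, 25, -1),
    (0, 1, 40, +1),
]

def accumulate(events, t_start, t_end):
    """Sum event polarities per pixel over a time window -> dense frame."""
    frame = np.zeros((H, W), dtype=np.int32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            frame[y, x] += p
    return frame

print(accumulate(events, 0, 50))  # pixel (1, 0) holds +2, pixel (3, 2) holds -1
```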
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
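The TTFS code this entry refers to can be stated compactly: every neuron fires at most once, and larger values fire earlier, so a single spike time carries a whole activation. That is what makes TTFS systems extremely sparse, and also what makes them fragile, since one lost or mistimed spike corrupts the value. A minimal sketch, where the encoding window `T_MAX` is an illustrative assumption:

```python
# Time-to-first-spike (TTFS) coding: one spike (or none) per value,
# with larger values mapped to earlier spike times.
T_MAX = 100

def ttfs_encode(value, t_max=T_MAX):
    """Map a value in [0, 1] to a spike time; larger value -> earlier spike."""
    return None if value <= 0 else round((1.0 - value) * t_max)

def ttfs_decode(spike_time, t_max=T_MAX):
    return 0.0 if spike_time is None else 1.0 - spike_time / t_max

for v in (0.0, 0.25, 1.0):
    t = ttfs_encode(v)
    print(v, t, ttfs_decode(t))   # round-trips each value through one spike
```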
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- SHARP: An Adaptable, Energy-Efficient Accelerator for Recurrent Neural Network [17.928105470385614]
We propose an intelligent tile-based mechanism that increases the adaptiveness of RNN computation in order to efficiently handle data dependencies.
SHARP achieves 2x, 2.8x, and 82x speedups on average across different RNN models and resource budgets.
arXiv Detail & Related papers (2019-11-04T14:51:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.