Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks: An Overview
- URL: http://arxiv.org/abs/2408.14437v1
- Date: Mon, 26 Aug 2024 17:22:11 GMT
- Title: Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks: An Overview
- Authors: Ilkin Aliyev, Kama Svoboda, Tosiron Adegbija, Jean-Marc Fellous
- Abstract summary: Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven nature of biological neural processing, and offer the potential for ultra-low-power artificial intelligence.
We explore the hardware-software co-design of sparse SNNs, examining how sparsity representation, hardware architectures, and training techniques influence hardware efficiency.
Our work aims to illuminate the path towards embedded neuromorphic systems that fully exploit the computational advantages of sparse SNNs.
- Score: 1.0499611180329804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven nature of biological neural processing, and offer the potential for ultra-low-power artificial intelligence. However, realizing their efficiency benefits requires specialized hardware and a co-design approach that effectively leverages sparsity. We explore the hardware-software co-design of sparse SNNs, examining how sparsity representation, hardware architectures, and training techniques influence hardware efficiency. We analyze the impact of static and dynamic sparsity, discuss the implications of different neuron models and encoding schemes, and investigate the need for adaptability in hardware designs. Our work aims to illuminate the path towards embedded neuromorphic systems that fully exploit the computational advantages of sparse SNNs.
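As a concrete illustration of the dynamic (activation) sparsity discussed in the abstract, the following NumPy sketch is an assumption-based toy example, not code from the paper: a leaky integrate-and-fire (LIF) layer driven by Poisson rate-coded inputs accumulates synaptic current only for inputs that actually spike in a timestep, so the fraction of skipped multiply-accumulates is exactly the workload reduction a co-designed accelerator can exploit. All dimensions and neuron parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and LIF parameters (assumed for illustration only).
n_in, n_out, timesteps = 256, 64, 50
tau, v_threshold = 20.0, 1.0

weights = rng.normal(0.0, 0.1, size=(n_out, n_in))   # dense synaptic weight matrix
rates = rng.uniform(0.0, 0.2, size=n_in)             # per-input firing probability (rate coding)

v = np.zeros(n_out)                        # membrane potentials
total_syn_ops = 0                          # synaptic operations actually performed
dense_syn_ops = n_in * n_out * timesteps   # what a dense ANN-style layer would perform

for t in range(timesteps):
    in_spikes = rng.random(n_in) < rates   # Poisson-like spike encoding of the input
    active = np.flatnonzero(in_spikes)     # indices of inputs that spiked this timestep

    # Event-driven accumulation: only weight columns of active inputs are touched.
    current = weights[:, active].sum(axis=1)
    total_syn_ops += n_out * active.size

    v += (current - v) / tau               # leaky integration
    out_spikes = v >= v_threshold          # fire when the threshold is crossed
    v[out_spikes] = 0.0                    # hard reset after a spike

print(f"synaptic ops performed: {total_syn_ops} / {dense_syn_ops} "
      f"({100 * total_syn_ops / dense_syn_ops:.1f}% of dense)")
```

With the roughly 10% average input activity assumed here, the layer performs about a tenth of the dense layer's synaptic operations; how a hardware design represents and skips the inactive columns is exactly the sparsity-representation question the survey examines.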
Related papers
- Detection of Fast-Moving Objects with Neuromorphic Hardware [12.323012135924374]
Neuromorphic Computing (NC), and SNNs in particular, are often viewed as the next generation of Neural Networks (NNs).
arXiv Detail & Related papers (2024-03-15T20:53:10Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN).
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to electronic counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can compress the model size, reduce memory footprints, and facilitate low-latency execution.
We study various combinations of pruning and quantization in isolation, cumulatively, and simultaneously, applied to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy even at ternary weight precision (a minimal ternary-quantization sketch appears after this list).
arXiv Detail & Related papers (2023-02-08T16:25:20Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Silicon photonic subspace neural chip for hardware-efficient deep learning [11.374005508708995]
The optical neural network (ONN) is a promising candidate for next-generation neurocomputing.
We devise a hardware-efficient photonic subspace neural network (PSNN) architecture.
We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit.
arXiv Detail & Related papers (2021-11-11T06:34:05Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition (a rate-based conversion sketch appears after this list).
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks [0.9449650062296823]
Neuromorphic computing architectures model Spiking Neural Networks (SNNs) in silicon.
We study Spiking Neural Networks in two neuromorphic architecture implementations with the goal of decreasing their size.
We propose a multiobjective fitness function to optimize the size and resiliency of the SNN.
arXiv Detail & Related papers (2020-02-04T16:58:25Z)
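To accompany the quantization-and-pruning entry above, here is a minimal NumPy sketch of magnitude pruning followed by threshold-based ternary weight quantization. It is an illustration under assumed parameters, not the scheme used in that paper; the 0.7 threshold scale follows a common ternary-weight heuristic.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude (static sparsity)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    cutoff = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= cutoff, 0.0, weights)

def quantize_ternary(weights: np.ndarray, delta_scale: float = 0.7) -> np.ndarray:
    """Map weights to {-w_q, 0, +w_q} using a magnitude threshold (assumed rule)."""
    delta = delta_scale * np.mean(np.abs(weights))              # magnitudes below delta snap to zero
    mask = np.abs(weights) > delta
    w_q = np.abs(weights[mask]).mean() if mask.any() else 0.0   # shared magnitude for nonzero weights
    return np.where(mask, np.sign(weights) * w_q, 0.0)

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.05, size=(128, 256))

w_pruned = prune_by_magnitude(w, sparsity=0.5)   # static (weight) sparsity
w_ternary = quantize_ternary(w_pruned)           # weights representable with 2 bits

print("unique values after ternarization:", np.unique(w_ternary).size)
print("zero fraction:", float(np.mean(w_ternary == 0.0)))
```

The point of combining the two steps is that static weight sparsity and low-precision weights compound: zeros can be skipped entirely and the remaining weights need only a 2-bit code plus one shared magnitude, shrinking both memory footprint and datapath width on an SNN accelerator.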
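Likewise, for the ANN-to-SNN conversion entry above, the sketch below is a simplified, assumed rate-based conversion of a single ReLU layer to integrate-and-fire neurons (weight normalization plus soft reset), not the progressive tandem learning framework itself; it only shows why output firing rates can approximate ReLU activations.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out, timesteps = 100, 10, 200
w = rng.normal(0.0, 0.3, size=(n_out, n_in))
x = rng.uniform(0.0, 1.0, size=n_in)        # analog input (e.g. normalized pixel intensities)

# ANN side: plain ReLU activations of one layer.
ann_out = np.maximum(w @ x, 0.0)

# Conversion: scale weights so the largest activation maps to roughly one spike per timestep.
scale = float(ann_out.max())
if scale == 0.0:
    scale = 1.0
w_snn = w / scale

# SNN side: integrate-and-fire neurons driven by a constant input current each timestep.
v = np.zeros(n_out)
spike_count = np.zeros(n_out)
for _ in range(timesteps):
    v += w_snn @ x
    fired = v >= 1.0
    spike_count += fired
    v[fired] -= 1.0                          # soft reset (subtract threshold) reduces conversion error

snn_rate = spike_count / timesteps           # firing rate approximates the normalized ReLU output
print("max abs rate error:", float(np.abs(snn_rate - ann_out / scale).max()))
```

The residual error shrinks as the number of timesteps grows, which is the latency/accuracy trade-off that conversion-plus-fine-tuning frameworks such as the one above aim to improve.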