Quantized Context Based LIF Neurons for Recurrent Spiking Neural Networks in 45nm
- URL: http://arxiv.org/abs/2404.18066v1
- Date: Sun, 28 Apr 2024 04:32:44 GMT
- Title: Quantized Context Based LIF Neurons for Recurrent Spiking Neural Networks in 45nm
- Authors: Sai Sukruth Bezugam, Yihao Wu, JaeBum Yoo, Dmitri Strukov, Bongjin Kim
- Abstract summary: In this study, we propose the first hardware implementation of a context-based recurrent spiking neural network (RSNN).
We present a quantized version of the CLIF neuron (qCLIF), developed through a hardware-software codesign approach that exploits the sparse activity of RSNNs.
Our analysis spans network configurations of 10 to 200 qCLIF neurons, supporting up to 82k synapses within a 1.86 mm^2 footprint, demonstrating scalability and efficiency.
- Score: 0.3332435791857516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we propose the first hardware implementation of a context-based recurrent spiking neural network (RSNN), emphasizing the integration of dual information streams within neocortical pyramidal neurons, specifically the Context-Dependent Leaky Integrate and Fire (CLIF) neuron model, an essential element of RSNNs. We present a quantized version of the CLIF neuron (qCLIF), developed through a hardware-software codesign approach that exploits the sparse activity of RSNNs. Implemented in a 45nm technology node, the qCLIF is compact (900um^2) and achieves a high accuracy of 90% on the DVS gesture classification dataset despite 8-bit quantization. Our analysis spans network configurations of 10 to 200 qCLIF neurons, supporting up to 82k synapses within a 1.86 mm^2 footprint, demonstrating scalability and efficiency.
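As a rough illustration of what an 8-bit qCLIF datapath might compute, below is a minimal fixed-point sketch of a leaky integrate-and-fire update with a gated context stream. The update rule, the parameter names (decay, ctx_gain, v_th), and the bit allocation are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

def qclif_step(v, i_ff, i_ctx, decay=230, ctx_gain=64, v_th=100, n_bits=8):
    """One hypothetical qCLIF update in n_bits signed fixed point.

    `decay` and `ctx_gain` are fixed-point multipliers with an implicit
    /256 scale; `i_ff` is the feedforward (basal) input and `i_ctx` the
    context (apical) input. Illustrative only: the paper's exact qCLIF
    dynamics are not specified in the abstract.
    """
    q_min, q_max = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    gated_ctx = (ctx_gain * i_ctx) >> 8        # context stream scales the drive
    v = ((decay * v) >> 8) + i_ff + gated_ctx  # leaky integration, fixed point
    v = int(np.clip(v, q_min, q_max))          # saturate to the 8-bit range
    spike = v >= v_th
    if spike:
        v = 0                                  # hard reset on spike
    return v, spike
```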
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities with respect to sequence length from cubic and quadratic, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the tradeoffs of our proposal in terms of accuracy, inference latency, spiking sparsity, and energy consumption across datasets.
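As a minimal sketch of the first-to-spike readout this entry refers to: the predicted class is the output neuron that fires first, so inference can stop at the first output spike. The helper below is illustrative, not the paper's implementation.

```python
import numpy as np

def first_to_spike_readout(first_spike_times):
    """Classify by the earliest-firing output neuron.

    `first_spike_times` holds each output neuron's first spike time,
    with np.inf for neurons that never fired; stopping at the first
    output spike is where the latency benefit comes from.
    """
    if np.all(np.isinf(first_spike_times)):
        return None  # no output spike within the simulation window
    return int(np.argmin(first_spike_times))

# Neuron 2 fires first (t = 3.0), so the predicted class is 2.
print(first_to_spike_readout(np.array([7.0, np.inf, 3.0, 5.0])))
```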
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function [0.5046831208137847]
This work aims to bridge the gap between recent progress in quantized neural networks and spiking neural networks.
It presents an extensive study of the performance of a quantization function represented as a linear combination of sigmoid functions.
The presented quantization function demonstrates the state-of-the-art performance on four popular benchmarks.
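The quantization function described above lends itself to a compact sketch: a staircase built from a linear combination of shifted sigmoids, smooth enough to backpropagate through during quantization-aware training. Parameter names (n_levels, step, temp) are illustrative, not the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_quantize(x, n_levels=4, step=1.0, temp=0.1):
    """Differentiable quantizer: a sum of sigmoids forming a soft staircase.

    Each sigmoid contributes one step of height `step` centered between
    adjacent levels; as `temp` -> 0 this approaches a hard uniform
    quantizer, while temp > 0 keeps gradients nonzero for training.
    """
    thresholds = (np.arange(n_levels - 1) + 0.5) * step
    return sum(step * sigmoid((x - t) / temp) for t in thresholds)

x = np.linspace(-0.5, 3.5, 9)
print(soft_quantize(x))  # smooth approximation of round-to-nearest-level
```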
arXiv Detail & Related papers (2023-05-30T09:42:05Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded in homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Study of the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
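For reference, the kernel regression predictor associated with an NTK $K$ takes the standard form below; the paper's result establishes an equivalence of this type for NNs-Hp (the notation here is generic, not the paper's):

```latex
% Kernel regression predictor for training inputs X, targets y, and NTK K:
f(x) \;=\; K(x, X)\, K(X, X)^{-1}\, y
```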
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
Input speech is first up-streamed to a quantum computing server to extract Mel-spectrogram features.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
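A classically simulated sketch of the random-parameter encoding step described above: angle-encode a small spectrogram patch, apply a random orthogonal layer as a stand-in for the random quantum circuit, and read out per-qubit Z expectations as the convolutional features. This is an illustrative quanvolution-style toy, not the paper's circuit.

```python
import numpy as np

def quanvolution_patch(patch, rng):
    """Encode a small patch with a random-parameter circuit (simulated)."""
    n = patch.size                          # one qubit per patch value
    # Angle encoding: RY(x)|0> = cos(x/2)|0> + sin(x/2)|1> per qubit.
    state = np.array([1.0])
    for x in patch.ravel():
        state = np.kron(state, np.array([np.cos(x / 2), np.sin(x / 2)]))
    # Random entangling layer: a fixed random orthogonal matrix.
    m, _ = np.linalg.qr(rng.standard_normal((2 ** n, 2 ** n)))
    state = m @ state
    # Feature per qubit: <Z> = P(bit = 0) - P(bit = 1).
    probs = state ** 2
    feats = []
    for q in range(n):
        bits = (np.arange(2 ** n) >> (n - 1 - q)) & 1
        feats.append(probs[bits == 0].sum() - probs[bits == 1].sum())
    return np.array(feats)

rng = np.random.default_rng(0)
print(quanvolution_patch(np.array([[0.1, 0.7], [0.3, 0.9]]), rng))
```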
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
- Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition [4.059246535401608]
Mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging.
For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit onto the tile resources.
We propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units.
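The unrolling idea admits a short sketch: split one neuron's pre-synaptic weights across a chain of units whose fan-in fits the tile, with each later unit reserving one input to carry the partial sum from its predecessor. The carry mechanism and unit sizing below are assumptions; the paper's unit semantics may differ.

```python
def unroll_neuron(weights, max_fanin):
    """Split one neuron's pre-synaptic weights across a chain of units.

    The first unit uses all `max_fanin` inputs for synapses; each later
    unit reserves one input for the running partial sum forwarded by its
    predecessor, leaving max_fanin - 1 synapse slots. This avoids pruning
    connections that do not fit on a single tile.
    """
    chain = [weights[:max_fanin]]
    rest, step = weights[max_fanin:], max_fanin - 1
    while rest:
        chain.append(rest[:step])
        rest = rest[step:]
    return chain

# A 7-synapse neuron mapped to units with 3 inputs each: unit 0 gets
# [w0, w1, w2]; units 1 and 2 each get 2 synapses plus 1 carry input.
print(unroll_neuron([0.2, -0.1, 0.4, 0.3, -0.5, 0.1, 0.6], max_fanin=3))
```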
arXiv Detail & Related papers (2020-09-19T21:04:46Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
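The simplest instance of the mean-field picture is a single wide layer represented by a measure over weights; the paper generalizes this to deep networks via measures over features. The formula below is the canonical two-layer case, not the paper's construction:

```latex
% One wide layer in the continuous limit: the network is a measure mu
% over weights rather than a finite list of weights.
f(x) \;=\; \int \sigma(\langle w, x \rangle)\, \mathrm{d}\mu(w),
\qquad
\mu \;=\; \lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^{m} \delta_{w_i}
```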
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
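One common layer-wise step in ANN-to-SNN conversion pipelines is data-based weight normalization, sketched below; this is a generic illustration of the conversion idea, not the progressive tandem learning rule the paper proposes.

```python
import numpy as np

def normalize_for_snn(layer_weights, layer_activations):
    """Layer-wise weight normalization for rate-based ANN-to-SNN conversion.

    Rescales each layer so the maximum ANN activation observed on
    calibration data maps to roughly one spike per timestep, keeping
    firing rates in range layer by layer.
    """
    normalized, prev_scale = [], 1.0
    for w, acts in zip(layer_weights, layer_activations):
        scale = float(np.max(acts))              # calibration statistic
        normalized.append(w * prev_scale / scale)
        prev_scale = scale
    return normalized
```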
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Hardware Implementation of Spiking Neural Networks Using Time-To-First-Spike Encoding [5.709318189772638]
Hardware-based spiking neural networks (SNNs) are regarded as promising candidates for cognitive computing systems.
In this work, we use temporal backpropagation to train an SNN in which the firing time carries the information.
The temporally encoded SNN with 512 hidden neurons showed an accuracy of 96.90% for the MNIST test set.
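Time-to-first-spike encoding maps stronger inputs to earlier spikes. Below is a minimal sketch assuming a linear intensity-to-latency mapping, which is one common choice and not necessarily the paper's.

```python
import numpy as np

def ttfs_encode(pixels, t_max=100):
    """Time-to-first-spike encoding: brighter pixels fire earlier.

    Maps normalized intensities in [0, 1] to integer spike times in
    [0, t_max]; zero-intensity pixels never fire (time = inf).
    """
    pixels = np.asarray(pixels, dtype=float)
    return np.where(pixels > 0, np.round((1.0 - pixels) * t_max), np.inf)

print(ttfs_encode([1.0, 0.5, 0.0]))  # -> [0., 50., inf]
```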
arXiv Detail & Related papers (2020-06-09T03:31:15Z)
- Training of Quantized Deep Neural Networks using a Magnetic Tunnel Junction-Based Synapse [23.08163992580639]
Quantized neural networks (QNNs) are being actively researched as a solution for the computational complexity and memory intensity of deep neural networks.
We show how magnetic tunnel junction (MTJ) devices can be used to support QNN training.
We introduce a novel synapse circuit that uses the MTJ behavior to support the quantized update.
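A behavioral sketch of how stochastic device switching can realize a quantized update: the switching probability tracks the magnitude of the desired analog update, so integer weights follow the full-precision gradient on average. This is an illustrative model, not the paper's synapse circuit.

```python
import numpy as np

def mtj_stochastic_update(w_int, grad, lr=0.01, rng=None):
    """Stochastic-rounding style update, MTJ-like behavioral model.

    The probability of flipping a weight by +/-1 equals the clipped
    magnitude of the full-precision update, so the expected new weight
    tracks the analog SGD step while weights stay integer-valued.
    """
    if rng is None:
        rng = np.random.default_rng()
    delta = -lr * np.asarray(grad, dtype=float)    # desired analog update
    p_switch = np.clip(np.abs(delta), 0.0, 1.0)    # device switching probability
    switched = rng.random(np.shape(delta)) < p_switch
    return w_int + np.sign(delta).astype(int) * switched.astype(int)

w = np.zeros(5, dtype=int)
print(mtj_stochastic_update(w, grad=np.array([-50.0, -10.0, 0.0, 10.0, 50.0])))
```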
arXiv Detail & Related papers (2019-12-29T11:36:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.