Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial
Decomposition
- URL: http://arxiv.org/abs/2009.09298v1
- Date: Sat, 19 Sep 2020 21:04:46 GMT
- Title: Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial
Decomposition
- Authors: Adarsha Balaji, Shihao Song, Anup Das, Jeffrey Krichmar, Nikil Dutt,
James Shackleford, Nagarajan Kandasamy, Francky Catthoor
- Abstract summary: Mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging.
For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit onto the tile resources.
We propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units.
- Score: 4.059246535401608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With growing model complexity, mapping Spiking Neural Network (SNN)-based
applications to tile-based neuromorphic hardware is becoming increasingly
challenging. This is because the synaptic storage resources on a tile, viz. a
crossbar, can accommodate only a fixed number of pre-synaptic connections per
post-synaptic neuron. For complex SNN models that have many pre-synaptic
connections per neuron, some connections may need to be pruned after training
to fit onto the tile resources, leading to a loss in model quality, e.g.,
accuracy. In this work, we propose a novel unrolling technique that decomposes
a neuron function with many pre-synaptic connections into a sequence of
homogeneous neural units, where each neural unit is a function computation
node, with two pre-synaptic connections. This spatial decomposition technique
significantly improves crossbar utilization and retains all pre-synaptic
connections, avoiding the loss in model quality that connection pruning would
otherwise cause. We integrate the proposed technique within an existing SNN mapping
framework and evaluate it using machine learning applications on the DYNAP-SE
state-of-the-art neuromorphic hardware. Our results demonstrate an average 60%
lower crossbar requirement, 9x higher synapse utilization, 62% lower wasted
energy on the hardware, and between 0.8% and 4.6% increase in model quality.
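To make the decomposition concrete, here is a minimal Python sketch of the unrolling idea: a neuron with many pre-synaptic connections is replaced by a chain of homogeneous two-input units, each combining one original synapse with the partial result carried over from the previous unit, so every unit fits within a crossbar's fan-in limit. The names (NeuralUnit, decompose_neuron, evaluate_chain) and the simple threshold-only neuron model are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NeuralUnit:
    """Two-input function-computation node: one original synapse plus the
    running partial sum carried from the previous unit in the chain."""
    weight: float  # weight of the pre-synaptic connection handled by this unit

    def forward(self, partial_sum: float, x: float) -> float:
        # Accumulate this connection's contribution into the carried partial sum.
        return partial_sum + self.weight * x


def decompose_neuron(weights: List[float]) -> List[NeuralUnit]:
    """Unroll a neuron with len(weights) pre-synaptic connections into a
    sequence of homogeneous two-input units; no connection is pruned."""
    return [NeuralUnit(w) for w in weights]


def evaluate_chain(units: List[NeuralUnit], inputs: List[float],
                   threshold: float = 1.0) -> bool:
    """Propagate the partial sum down the chain and apply the original
    neuron's spike threshold only at the final unit."""
    partial = 0.0
    for unit, x in zip(units, inputs):
        partial = unit.forward(partial, x)
    return partial >= threshold


if __name__ == "__main__":
    weights = [0.2, -0.1, 0.5, 0.4, 0.3]   # a neuron with 5 pre-synaptic connections
    units = decompose_neuron(weights)       # 5 homogeneous two-input units
    spikes = evaluate_chain(units, [1, 0, 1, 1, 0])
    print(spikes)  # True: 0.2 + 0.5 + 0.4 = 1.1 >= 1.0
```

In a hardware mapping, each such unit would occupy its own crossbar rows rather than forcing all synapses of the original neuron into a single crossbar, which is how the technique trades a longer chain of small units for retaining every pre-synaptic connection.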
Related papers
- Quantized Context Based LIF Neurons for Recurrent Spiking Neural Networks in 45nm [0.3332435791857516]
In this study, we propose the first hardware implementation of a context-based recurrent spiking neural network (RSNN).
We present a quantized version of the CLIF neuron (qCLIF), developed through a hardware-software codesign approach utilizing the sparse activity of RSNN.
Our analysis spans a network configuration from 10 to 200 qCLIF neurons, supporting up to 82k synapses within a 1.86 mm2 footprint, demonstrating scalability and efficiency.
arXiv Detail & Related papers (2024-04-28T04:32:44Z)
- Learning in Convolutional Neural Networks Accelerated by Transfer Entropy [0.0]
In a feedforward network, the Transfer Entropy (TE) can be used to quantify the relationships between neuron output pairs located in different layers.
We introduce a novel training mechanism for CNN architectures which integrates the TE feedback connections.
arXiv Detail & Related papers (2024-04-03T13:31:49Z)
- Low Precision Quantization-aware Training in Spiking Neural Networks
with Differentiable Quantization Function [0.5046831208137847]
This work aims to bridge the gap between recent progress in quantized neural networks and spiking neural networks.
It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions.
The presented quantization function demonstrates state-of-the-art performance on four popular benchmarks.
arXiv Detail & Related papers (2023-05-30T09:42:05Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Hessian Aware Quantization of Spiking Neural Networks [1.90365714903665]
Neuromorphic architecture allows massively parallel computation with variable and local bit-precisions.
Current gradient based methods of SNN training use a complex neuron model with multiple state variables.
We present a simplified neuron model that reduces the number of state variables by 4-fold while still being compatible with gradient based training.
arXiv Detail & Related papers (2021-04-29T05:27:34Z)
- Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware
Constraints [0.30458514384586394]
Spiking Neural Networks (SNNs) are efficient computation models for pattern recognition on resource- and power-constrained platforms.
SNNs executed on neuromorphic hardware can further reduce energy consumption of these platforms.
arXiv Detail & Related papers (2020-11-27T19:10:23Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.