Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware
Constraints
- URL: http://arxiv.org/abs/2011.13965v1
- Date: Fri, 27 Nov 2020 19:10:23 GMT
- Title: Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints
- Authors: Adarsha Balaji and Anup Das
- Abstract summary: Spiking Neural Networks (SNNs) are efficient computation models for spatio-temporal pattern recognition on resource- and power-constrained platforms.
SNNs executed on neuromorphic hardware can further reduce energy consumption of these platforms.
- Score: 0.30458514384586394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) are efficient computation models to perform
spatio-temporal pattern recognition on resource- and power-constrained
platforms. SNNs executed on neuromorphic hardware can further reduce energy
consumption of these platforms. With increasing model size and complexity,
mapping SNN-based applications to tile-based neuromorphic hardware is becoming
increasingly challenging. This is attributed to the limitations of
neuro-synaptic cores, viz. a crossbar, to accommodate only a fixed number of
pre-synaptic connections per post-synaptic neuron. For complex SNN-based models
that have many neurons and pre-synaptic connections per neuron, (1) connections
may need to be pruned after training to fit onto the crossbar resources,
leading to a loss in model quality, e.g., accuracy, and (2) the neurons and
synapses need to be partitioned and placed on the neuro-synaptic cores of the
hardware, which could lead to increased latency and energy consumption. In this
work, we propose (1) a novel unrolling technique that decomposes a neuron
function with many pre-synaptic connections into a sequence of homogeneous
neural units to significantly improve the crossbar utilization and retain all
pre-synaptic connections, and (2) SpiNeMap, a novel methodology to map SNNs on
neuromorphic hardware with an aim to minimize energy consumption and spike
latency.
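The unrolling idea in (1) can be illustrated with a minimal sketch: a neuron whose fan-in exceeds what one crossbar column can accommodate is split into a chain of units, each taking at most `max_fanin` inputs, where every unit after the first reserves one input slot for the partial sum forwarded by its predecessor. The function names and the plain weighted-sum simplification below are ours for illustration only, not the paper's actual spiking-neuron decomposition:

```python
def unroll_fanin(weights, max_fanin):
    """Split one neuron's pre-synaptic weights into a chain of
    homogeneous units, each respecting the crossbar fan-in limit.
    Units after the first reserve one slot for the forwarded
    partial sum, so they carry at most max_fanin - 1 weights."""
    units = []
    i = 0
    while i < len(weights):
        slots = max_fanin if not units else max_fanin - 1
        units.append(weights[i:i + slots])
        i += slots
    return units

def evaluate_chain(units, inputs):
    """Sanity check: the chained units reproduce the original
    neuron's full weighted input sum (no connections pruned)."""
    total = 0.0
    i = 0
    for w in units:
        xs = inputs[i:i + len(w)]
        # each unit adds its local weighted sum to the forwarded partial sum
        total += sum(wj * xj for wj, xj in zip(w, xs))
        i += len(w)
    return total

# Example: a neuron with 7 pre-synaptic connections mapped to crossbars
# with fan-in 4 becomes two units: [w1..w4] and [w5, w6, w7] plus the
# forwarded partial sum, so no connection has to be pruned.
```

The key property, which the paper exploits, is that the decomposition preserves all pre-synaptic connections: accuracy is retained at the cost of extra homogeneous units and one forwarding hop per unit.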
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Deep Pulse-Coupled Neural Networks [31.65350290424234]
Spiking Neural Networks (SNNs) capture the information processing mechanism of the brain by taking advantage of spiking neurons.
In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN)
We construct deep pulse-coupled neural networks (DPCNNs) by replacing commonly used LIF neurons in SNNs with PCNN neurons.
arXiv Detail & Related papers (2023-12-24T08:26:00Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Event-Driven Tactile Learning with Location Spiking Neurons [5.822511654546528]
Spiking Neural Networks (SNNs) enable event-driven tactile learning.
We develop a novel neuron model called the "location spiking neuron".
We show the superior energy efficiency of our models over other works on event-driven learning.
arXiv Detail & Related papers (2022-07-23T12:15:43Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Thermal-Aware Compilation of Spiking Neural Networks to Neuromorphic Hardware [0.30458514384586394]
We propose a technique to map neurons and synapses of SNN-based machine learning workloads to neuromorphic hardware.
We demonstrate an average 11.4 reduction in the average temperature of each crossbar in the hardware, leading to a 52% reduction in the leakage power consumption.
arXiv Detail & Related papers (2020-10-09T19:29:14Z)
- Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition [4.059246535401608]
Mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging.
For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit onto the tile resources.
We propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units.
arXiv Detail & Related papers (2020-09-19T21:04:46Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.