Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic
Hardware
- URL: http://arxiv.org/abs/2103.05707v1
- Date: Tue, 9 Mar 2021 20:43:28 GMT
- Title: Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic
Hardware
- Authors: Twisha Titirsha, Shihao Song, Anup Das, Jeffrey Krichmar, Nikil Dutt,
Nagarajan Kandasamy, Francky Catthoor
- Abstract summary: Neuromorphic computing systems are embracing memristors to implement high density and low power synaptic storage as crossbar arrays in hardware.
Long bitlines and wordlines in a memristive crossbar are a major source of parasitic voltage drops, which create current asymmetry.
We propose eSpine, a technique to improve lifetime by incorporating the endurance variation within each crossbar in mapping machine learning workloads.
- Score: 4.234079120512533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing systems are embracing memristors to implement high
density and low power synaptic storage as crossbar arrays in hardware. These
systems are energy efficient in executing Spiking Neural Networks (SNNs). We
observe that long bitlines and wordlines in a memristive crossbar are a major
source of parasitic voltage drops, which create current asymmetry. Through
circuit simulations, we show the significant endurance variation that results
from this asymmetry. Therefore, if the critical memristors (ones with lower
endurance) are overutilized, they may lead to a reduction of the crossbar's
lifetime. We propose eSpine, a novel technique to improve lifetime by
incorporating the endurance variation within each crossbar in mapping machine
learning workloads, ensuring that synapses with higher activation are always
implemented on memristors with higher endurance, and vice versa. eSpine works
in two steps. First, it uses the Kernighan-Lin Graph Partitioning algorithm to
partition a workload into clusters of neurons and synapses, where each cluster
can fit in a crossbar. Second, it uses an instance of Particle Swarm
Optimization (PSO) to map clusters to tiles, where the placement of synapses of
a cluster to memristors of a crossbar is performed by analyzing their
activation within the workload. We evaluate eSpine for a state-of-the-art
neuromorphic hardware model with phase-change memory (PCM)-based memristors.
Using 10 SNN workloads, we demonstrate a significant improvement in the
effective lifetime.
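A minimal sketch of this two-step flow is shown below, assuming a NetworkX-style connectivity graph. The names `partition_workload` and `place_synapses` are hypothetical, the neuron-count stopping rule is a simplification of the "fits in a crossbar" condition (eSpine must fit both neurons and synapses), the SNN graph is treated as undirected for the bisection, and the PSO-based cluster-to-tile assignment is deliberately elided:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection


def partition_workload(snn_graph, crossbar_size):
    """Step 1: recursively bisect the SNN connectivity graph with
    Kernighan-Lin until every cluster fits in one crossbar.

    Stopping on neuron count alone is a simplification of the paper's
    constraint that a cluster's neurons AND synapses fit in a crossbar.
    """
    pending, clusters = [set(snn_graph.nodes)], []
    while pending:
        cluster = pending.pop()
        if len(cluster) <= crossbar_size:
            clusters.append(cluster)
        else:
            half_a, half_b = kernighan_lin_bisection(snn_graph.subgraph(cluster))
            pending += [set(half_a), set(half_b)]
    return clusters


def place_synapses(synapse_activations, memristor_endurance):
    """Activation-aware placement within one crossbar: the most frequently
    activated synapses go to the highest-endurance memristors, and vice versa.

    synapse_activations: {synapse_id: spike count through that synapse}
    memristor_endurance: {(row, col): estimated endurance in cycles}
    """
    by_activation = sorted(synapse_activations, key=synapse_activations.get, reverse=True)
    by_endurance = sorted(memristor_endurance, key=memristor_endurance.get, reverse=True)
    return dict(zip(by_activation, by_endurance))


if __name__ == "__main__":
    toy_snn = nx.gnm_random_graph(512, 4096, seed=1)  # toy undirected connectivity
    clusters = partition_workload(toy_snn, crossbar_size=128)
    print(len(clusters), "clusters, largest:", max(map(len, clusters)))
```

A faithful implementation would add the PSO search over cluster-to-tile assignments, with the per-synapse activation analysis the abstract describes feeding its fitness function.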
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of the SNN and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
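The conventional rate-decoding readout described above can be sketched in a few lines; the function name, the tanh squashing, and all shapes below are illustrative assumptions rather than the paper's exact layer:

```python
import numpy as np

def decode_action(spike_counts, W, b, T):
    """Convert SNN output firing rates into a continuous action vector
    via a fully-connected readout (the conventional scheme described above).

    spike_counts: spikes emitted by each output neuron over T timesteps
    W, b:         weights/bias of the non-spiking fully-connected layer
    """
    rates = spike_counts / T        # per-neuron firing rate in [0, 1]
    return np.tanh(W @ rates + b)   # bounded, multi-dimensional action
```

A fully spiking actor, as the paper pursues, would avoid the floating-point matrix operation in the final line.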
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
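As a rough illustration of the memory-token idea in this summary, a generic dot-product attention over a learnable token bank looks as follows; this is a sketch of the standard pattern, not the paper's exact layer, and all names and shapes are assumed:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_memory(queries, memory_tokens):
    """Cross-attention from input features to a bank of learnable memory
    tokens (scaled dot-product attention; generic pattern, assumed here).

    queries:       (n, d) input feature vectors
    memory_tokens: (m, d) parameters trained jointly with the network
    """
    scores = queries @ memory_tokens.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ memory_tokens   # (n, d) memory readout
```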
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays [3.5969667977870796]
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).
Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes.
We obtain highly competitive accuracy amongst previously reported lightweight dense fully MSNNs on several benchmarks.
arXiv Detail & Related papers (2022-06-26T23:13:34Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks [0.0]
Spiking Neural Networks (SNNs) compute in an event-based manner to achieve a more efficient computation than standard Neural Networks.
We propose a novel architecture that is optimized for the processing of Convolutional SNNs that feature a high degree of activation sparsity.
arXiv Detail & Related papers (2022-03-23T14:18:58Z)
- Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints [0.30458514384586394]
Spiking Neural Networks (SNNs) are efficient at pattern recognition on resource- and power-constrained platforms.
SNNs executed on neuromorphic hardware can further reduce energy consumption of these platforms.
arXiv Detail & Related papers (2020-11-27T19:10:23Z)
- Thermal-Aware Compilation of Spiking Neural Networks to Neuromorphic Hardware [0.30458514384586394]
We propose a technique to map neurons and synapses of SNN-based machine learning workloads to neuromorphic hardware.
We demonstrate an average reduction of 11.4 in the temperature of each crossbar in the hardware, leading to a 52% reduction in leakage power consumption.
arXiv Detail & Related papers (2020-10-09T19:29:14Z)
- Reliability-Performance Trade-offs in Neuromorphic Computing [0.30458514384586394]
Neuromorphic architectures built with Non-Volatile Memory (NVM) can significantly improve the energy efficiency of machine learning tasks designed with Spiking Neural Networks (SNNs).
We observe that the parasitic voltage drops create a significant asymmetry in programming speed and reliability of NVM cells in a crossbar.
This asymmetry in neuromorphic architectures creates reliability-performance trade-offs, which can be exploited efficiently using SNN mapping techniques.
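A toy calculation makes the voltage-drop asymmetry concrete, as it applies both here and to the main paper above. All numeric values below are assumptions for illustration only, not figures from either paper:

```python
# Toy illustration of IR drop along one crossbar wordline.
R_SEGMENT = 5.0   # ohms of parasitic wire resistance per cell-to-cell segment (assumed)
V_DRIVE = 3.0     # volts applied at the wordline driver (assumed)
I_PROG = 1e-3     # amps of programming current for the selected cell (assumed)

def effective_voltage(position):
    """Voltage reaching the cell at `position` (0 = nearest the driver),
    assuming the programming current crosses every upstream segment."""
    return V_DRIVE - I_PROG * R_SEGMENT * (position + 1)

for pos in (0, 63, 127):
    print(f"cell {pos:3d}: {effective_voltage(pos):.2f} V")
# Cells far from the driver see a lower programming voltage, hence the
# position-dependent programming speed and endurance described above.
```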
arXiv Detail & Related papers (2020-09-26T19:38:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.