Reliability-Performance Trade-offs in Neuromorphic Computing
- URL: http://arxiv.org/abs/2009.12672v1
- Date: Sat, 26 Sep 2020 19:38:18 GMT
- Title: Reliability-Performance Trade-offs in Neuromorphic Computing
- Authors: Twisha Titirsha and Anup Das
- Abstract summary: Neuromorphic architectures built with Non-Volatile Memory (NVM) can significantly improve the energy efficiency of machine learning tasks designed with Spiking Neural Networks (SNNs).
We observe that the parasitic voltage drops create a significant asymmetry in programming speed and reliability of NVM cells in a crossbar.
This asymmetry in neuromorphic architectures creates reliability-performance trade-offs, which can be exploited efficiently using SNN mapping techniques.
- Score: 0.30458514384586394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic architectures built with Non-Volatile Memory (NVM) can
significantly improve the energy efficiency of machine learning tasks designed
with Spiking Neural Networks (SNNs). A major source of voltage drop in a
crossbar of these architectures is the parasitic components on the crossbar's
bitlines and wordlines, which are deliberately made longer to achieve lower
cost-per-bit. We observe that the parasitic voltage drops create a significant
asymmetry in programming speed and reliability of NVM cells in a crossbar.
Specifically, NVM cells that are on shorter current paths are faster to program
but have lower endurance than those on longer current paths, and vice versa.
This asymmetry in neuromorphic architectures creates reliability-performance
trade-offs, which can be exploited efficiently using SNN mapping techniques. In
this work, we demonstrate such trade-offs using a previously-proposed SNN
mapping technique with 10 workloads from contemporary machine learning tasks
for state-of-the-art neuromorphic hardware.
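To make the programming asymmetry concrete, below is a minimal Python sketch of position-dependent IR drop in a crossbar. It is our illustration rather than the paper's model: the resistance constants, crossbar size, and single-path approximation (sneak currents ignored) are all assumptions.

```python
# Minimal illustrative model of position-dependent IR drop in an NVM
# crossbar. All constants are assumptions for illustration, and sneak
# currents are ignored (single-path approximation).
R_CELL = 20e3   # nominal NVM cell resistance (ohms), assumed
R_SEG = 5.0     # parasitic resistance per wire segment (ohms), assumed
V_PROG = 2.0    # programming voltage applied at the periphery (volts)
N = 128         # crossbar dimension (N x N), assumed

def cell_voltage(i: int, j: int, n: int = N) -> float:
    """Voltage actually seen by cell (i, j) after parasitic IR drops.

    Current enters at the wordline driver of row i (left edge) and
    exits at the bitline sink of column j (bottom edge), crossing
    j + 1 wordline segments and n - i bitline segments.
    """
    r_parasitic = (j + 1) * R_SEG + (n - i) * R_SEG
    return V_PROG * R_CELL / (R_CELL + r_parasitic)

# Shortest current path (bottom-left cell) vs. longest (top-right cell).
print(f"shortest path: {cell_voltage(N - 1, 0):.4f} V")  # ~1.9990 V
print(f"longest path:  {cell_voltage(0, N - 1):.4f} V")  # ~1.8797 V
# The higher effective voltage on short paths programs those cells
# faster but also stresses them more, i.e., the speed/endurance
# asymmetry described in the abstract.
```

Cells near the drivers lose little voltage to the wires, so they program quickly but wear out sooner; a mapping technique can exploit this by placing rarely updated synapses on those cells.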
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Energy-Efficient Deployment of Machine Learning Workloads on Neuromorphic Hardware [0.11744028458220425]
Several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs).
Spiking neural networks (SNNs), which operate on discrete time-series data, have been shown to achieve substantial power reductions when deployed on specialized neuromorphic event-based/asynchronous hardware.
In this work, we provide a general guide to converting pre-trained DNNs into SNNs while also presenting techniques to improve the deployment of converted SNNs on neuromorphic hardware.
arXiv Detail & Related papers (2022-10-10T20:27:19Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) have recently gained momentum due to their low-power, multiplication-free computing.
SNNs require very long spike trains (up to 1000 time steps) to reach an accuracy similar to that of their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability (a generic surrogate-gradient workaround is sketched after this list).
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses [1.9809266426888898]
Training spiking recurrent neural networks (RNNs) on dedicated neuromorphic hardware is still an open challenge.
We present a simulation framework of differential-architecture arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model.
We train a spiking RNN whose weights are emulated in the presented simulation framework, using a recently proposed e-prop learning rule.
arXiv Detail & Related papers (2021-08-04T01:24:17Z)
- Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic Hardware [4.234079120512533]
Neuromorphic computing systems are embracing memristors to implement high density and low power synaptic storage as crossbar arrays in hardware.
Long bitlines and wordlines in a memristive crossbar are a major source of parasitic voltage drops, which create current asymmetry.
We propose eSpine, a technique to improve lifetime by incorporating the endurance variation within each crossbar when mapping machine learning workloads (a toy sketch of such endurance-aware placement follows this list).
arXiv Detail & Related papers (2021-03-09T20:43:28Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- Multi-Objective Optimization for Size and Resilience of Spiking Neural Networks [0.9449650062296823]
Neuromorphic computing architectures model Spiking Neural Networks (SNNs) in silicon.
We study Spiking Neural Networks in two neuromorphic architecture implementations with the goal of decreasing their size.
We propose a multi-objective fitness function to optimize the size and resilience of the SNN.
arXiv Detail & Related papers (2020-02-04T16:58:25Z)
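Following up the forward reference in the eSpine entry above, here is a toy greedy endurance-aware placement in Python. It is based only on our reading of the one-line summary, not eSpine's actual algorithm; the activity profile, endurance model, and greedy rule are illustrative assumptions.

```python
import numpy as np

# Toy endurance-aware placement inspired by the eSpine summary; the
# activity profile, endurance model, and greedy rule are illustrative
# assumptions, not the paper's algorithm.
n = 64
rng = np.random.default_rng(0)
spikes = rng.poisson(lam=50, size=n * n)     # profiled writes per synapse

i = np.arange(n)[:, None]                    # crossbar row index
j = np.arange(n)[None, :]                    # crossbar column index
path_len = (j + 1) + (n - i)                 # wire segments, as in the earlier sketch
endurance = 1e6 + 1e4 * path_len             # toy model: longer path -> higher endurance

# Greedy rule: the busiest synapses get the most endurable cells.
syn_order = np.argsort(-spikes)              # synapse ids, busiest first
cell_order = np.argsort(-endurance.ravel())  # flat cell ids, most endurable first
placement = np.empty(n * n, dtype=int)
placement[syn_order] = cell_order            # synapse id -> flat cell id

# Lifetime is limited by the fastest-wearing cell
# (endurance budget divided by per-iteration write activity).
lifetime = endurance.ravel()[placement] / np.maximum(spikes, 1)
print(f"worst-case lifetime: {lifetime.min():.1f} workload iterations")
```

Pairing the busiest synapses with the most endurable cells is the simplest way to balance wear; eSpine itself may use a different objective or solver.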
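And for the non-differentiability point raised in the DSR entry: the sketch below shows the generic surrogate-gradient workaround, in which the forward pass keeps the hard spike while the backward pass substitutes a smooth pseudo-derivative. This is a standard technique and explicitly not the paper's DSR method; all constants are arbitrary.

```python
import numpy as np

# Generic surrogate-gradient workaround for spike non-differentiability
# (NOT the paper's DSR method): forward pass uses the Heaviside step,
# backward pass substitutes a smooth pseudo-derivative.
def spike_forward(v: float, threshold: float = 1.0) -> float:
    """Non-differentiable spike generation (Heaviside step)."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v: float, threshold: float = 1.0,
                         beta: float = 5.0) -> float:
    """Smooth stand-in for dS/dv: derivative of a steep sigmoid."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

# One-weight toy problem, chain rule by hand:
# loss = (spike(w * x) - target)^2
w, x, target, lr = 0.8, 1.0, 1.0, 0.5
for step in range(5):
    v = w * x
    s = spike_forward(v)
    grad_w = 2 * (s - target) * spike_surrogate_grad(v) * x
    w -= lr * grad_w
    print(f"step {step}: v={v:.3f}, spike={s:.0f}, w={w:.3f}")
```

After the first update the weight crosses the threshold and the neuron spikes as desired; the hard Heaviside alone would have provided zero gradient everywhere and hence no training signal.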
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.