Thermal-Aware Compilation of Spiking Neural Networks to Neuromorphic
Hardware
- URL: http://arxiv.org/abs/2010.04773v2
- Date: Thu, 17 Dec 2020 21:26:34 GMT
- Title: Thermal-Aware Compilation of Spiking Neural Networks to Neuromorphic
Hardware
- Authors: Twisha Titirsha and Anup Das
- Abstract summary: We propose a technique to map neurons and synapses of SNN-based machine learning workloads to neuromorphic hardware.
We demonstrate an average 11.4K reduction in the average temperature of each crossbar in the hardware, leading to a 52% reduction in leakage power consumption.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hardware implementation of neuromorphic computing can significantly improve
performance and energy efficiency of machine learning tasks implemented with
spiking neural networks (SNNs), making these hardware platforms particularly
suitable for embedded systems and other energy-constrained environments. We
observe that the long bitlines and wordlines in a crossbar of the hardware
create significant current variations when propagating spikes through its
synaptic elements, which are typically designed with non-volatile memory (NVM).
Such current variations create a thermal gradient within each crossbar of the
hardware, depending on the machine learning workload and the mapping of neurons
and synapses of the workload to these crossbars. This thermal gradient
becomes significant at scaled technology nodes and increases the leakage
power in the hardware, leading to an increase in energy consumption. We
propose a novel technique to map neurons and synapses of SNN-based machine
learning workloads to neuromorphic hardware. We make two novel contributions.
First, we formulate a detailed thermal model for a crossbar in neuromorphic
hardware incorporating workload dependency, where the temperature of each
NVM-based synaptic cell is computed considering the thermal contributions from
its neighboring cells. Second, we incorporate this thermal model in the mapping
of neurons and synapses of SNN-based workloads using a hill-climbing heuristic.
The objective is to reduce the thermal gradient in crossbars. We evaluate our
neuron and synapse mapping technique using 10 machine learning workloads on a
state-of-the-art neuromorphic hardware platform. We demonstrate an average 11.4K
reduction in the average temperature of each crossbar in the hardware, leading
to a 52% reduction in the leakage power consumption (11% lower total energy
consumption) compared to a performance-oriented SNN mapping technique.
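The mapping step described above, hill climbing over candidate synapse placements to reduce a crossbar's thermal gradient, can be sketched in simplified form. This is not the authors' implementation: the grid size, the per-synapse power values, the 4-neighbour thermal coupling with coefficient `ALPHA`, and the pairwise-swap move are all illustrative assumptions.

```python
import random

N = 4        # crossbar is N x N cells (assumed size for illustration)
ALPHA = 0.5  # assumed neighbour thermal-coupling coefficient

def cell_temperature(grid, r, c):
    """Own power plus ALPHA-weighted power of the 4-connected neighbours."""
    t = grid[r][c]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < N and 0 <= nc < N:
            t += ALPHA * grid[nr][nc]
    return t

def thermal_gradient(grid):
    """Objective: spread between hottest and coolest cell."""
    temps = [cell_temperature(grid, r, c) for r in range(N) for c in range(N)]
    return max(temps) - min(temps)

def hill_climb(powers, iters=2000, seed=0):
    """Swap two synapse placements; keep the swap only if it lowers the gradient."""
    rng = random.Random(seed)
    flat = list(powers)
    rng.shuffle(flat)  # random initial placement
    grid = [flat[r * N:(r + 1) * N] for r in range(N)]
    best = thermal_gradient(grid)
    for _ in range(iters):
        r1, c1 = rng.randrange(N), rng.randrange(N)
        r2, c2 = rng.randrange(N), rng.randrange(N)
        grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
        g = thermal_gradient(grid)
        if g < best:
            best = g  # keep the improving swap
        else:         # revert the non-improving swap
            grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
    return grid, best

powers = [float(i) for i in range(N * N)]  # hypothetical synapse powers
placement, gradient = hill_climb(powers)
print(gradient)
```

Accepting only improving swaps is the essence of hill climbing; the paper's actual model computes each NVM cell's temperature from workload-dependent current flows through the bitlines and wordlines, not a fixed coupling coefficient.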
Related papers
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu m^2$, leading to a maximum spiking frequency of 300 kHz at 250 mV power supply.
arXiv Detail & Related papers (2024-08-14T17:51:20Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - A Resource-efficient Spiking Neural Network Accelerator Supporting
Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z) - An Adiabatic Capacitive Artificial Neuron with RRAM-based Threshold
Detection for Energy-Efficient Neuromorphic Computing [62.997667081978825]
We present an artificial neuron featuring adiabatic synapse capacitors to produce membrane potentials for the somas of neurons.
Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy saving.
arXiv Detail & Related papers (2022-02-02T17:12:22Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic
Hardware [4.234079120512533]
Neuromorphic computing systems are embracing memristors to implement high density and low power synaptic storage as crossbar arrays in hardware.
Long bitlines and wordlines in a memristive crossbar are a major source of parasitic voltage drops, which create current asymmetry.
We propose eSpine, a technique to improve lifetime by incorporating the endurance variation within each crossbar in mapping machine learning workloads.
arXiv Detail & Related papers (2021-03-09T20:43:28Z) - Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware
Constraints [0.30458514384586394]
Spiking Neural Networks (SNNs) are efficient at pattern recognition on resource- and power-constrained platforms.
SNNs executed on neuromorphic hardware can further reduce energy consumption of these platforms.
arXiv Detail & Related papers (2020-11-27T19:10:23Z) - Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial
Decomposition [4.059246535401608]
Mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging.
For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit onto the tile resources.
We propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units.
arXiv Detail & Related papers (2020-09-19T21:04:46Z)
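The unrolling technique in the last entry, decomposing a neuron with many pre-synaptic connections into a chain of homogeneous units, can be illustrated with a small sketch. The fan-in bound, the one-slot carry for the running partial sum, and the function name are assumptions for illustration, not the paper's interface.

```python
def unroll(num_inputs, fanin):
    """Decompose a neuron with `num_inputs` pre-synaptic connections into a
    chain of homogeneous units, each with at most `fanin` inputs.
    From the second unit onward, one input slot is reserved to carry the
    partial sum from the previous unit (assumes fanin >= 2)."""
    units = []
    remaining = num_inputs
    carry = 0  # 0 for the first unit, 1 afterwards (chained partial sum)
    while remaining > 0:
        take = min(fanin - carry, remaining)  # external inputs this unit absorbs
        units.append(take + carry)            # total inputs incl. carry slot
        remaining -= take
        carry = 1
    return units

# A neuron with 10 pre-synaptic connections on tiles with fan-in 4
# unrolls into three units of 4 inputs each (4 + 3 + 3 external inputs).
print(unroll(10, 4))
```

Each unit then fits within a tile's synaptic resources, avoiding the connection pruning that would otherwise be needed after training.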
This list is automatically generated from the titles and abstracts of the papers in this site.