THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js
Energy-Throughput Efficiency
- URL: http://arxiv.org/abs/2212.01696v1
- Date: Sat, 3 Dec 2022 21:36:29 GMT
- Title: THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js
Energy-Throughput Efficiency
- Authors: Mayank Senapati, Manil Dev Gomony, Sherif Eissa, Charlotte Frenkel,
and Henk Corporaal
- Abstract summary: Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency needs of edge computing devices.
We present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks.
- Score: 2.260725478207432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing using biologically inspired Spiking Neural Networks
(SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency needs
of edge computing devices. Neuromorphic hardware architectures that emulate
for edge computing devices. Neuromorphic hardware architectures that emulate
SNNs in analog/mixed-signal domains have been proposed to achieve
order-of-magnitude higher energy efficiency than all-digital architectures,
however at the expense of limited scalability, susceptibility to noise, complex
verification, and poor flexibility. On the other hand, state-of-the-art digital
neuromorphic architectures focus either on achieving high energy efficiency
(Joules/synaptic operation (SOP)) or throughput efficiency (SOPs/second/area),
resulting in poor ET efficiency. In this work, we present THOR, an all-digital
neuromorphic processor with a novel memory hierarchy and neuron update
architecture that addresses both energy consumption and throughput bottlenecks.
We implemented THOR in 28nm FDSOI CMOS technology and our post-layout results
demonstrate an ET efficiency of 7.29G $\text{TSOP}^2/\text{mm}^2\text{Js}$ at
0.9V, 400 MHz, which represents a 3X improvement over state-of-the-art digital
neuromorphic processors.
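The ET-efficiency figure of merit combines the two metrics the abstract names: energy efficiency (SOP/J) and throughput efficiency (SOP/s/mm$^2$). A minimal sketch of how the product is formed; the numbers below are illustrative placeholders, not THOR's measured values:

```python
def et_efficiency(energy_per_sop_j: float, throughput_sops: float, area_mm2: float) -> float:
    """Energy-Throughput efficiency in SOP^2 / (mm^2 * J * s).

    Formed as the product of energy efficiency (SOP/J) and
    throughput efficiency (SOP/s/mm^2), so improving either
    factor improves the combined figure of merit.
    """
    energy_efficiency = 1.0 / energy_per_sop_j          # SOP per joule
    throughput_efficiency = throughput_sops / area_mm2  # SOP/s per mm^2
    return energy_efficiency * throughput_efficiency

# Hypothetical operating point (not from the paper):
# 1 pJ/SOP, 10 GSOP/s throughput, 1 mm^2 core area.
print(et_efficiency(1e-12, 10e9, 1.0))  # SOP^2 / (mm^2 * J * s)
```

Because the metric is a product, a design that trades a 2x energy saving for a 2x throughput loss leaves ET efficiency unchanged; only joint improvements move it.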
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu$m$^2$, leading to a maximum spiking frequency of 300 kHz at a 250 mV power supply.
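As a quick sanity check on the quoted figures, the neuron's dynamic power at its maximum rate follows directly from energy per spike times firing rate; this is a back-of-envelope sketch, not a number stated in the abstract:

```python
def dynamic_power_w(energy_per_spike_j: float, spike_rate_hz: float) -> float:
    # Dynamic power = energy dissipated per spike x spikes per second.
    return energy_per_spike_j * spike_rate_hz

# Figures quoted above: 1.61 fJ/spike at the 300 kHz maximum rate.
power = dynamic_power_w(1.61e-15, 300e3)  # ~0.48 nW per neuron at full rate
```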
arXiv Detail & Related papers (2024-08-14T17:51:20Z)
- Topology Optimization of Random Memristors for Input-Aware Dynamic SNN [44.38472635536787]
We introduce pruning optimization for input-aware dynamic memristive spiking neural network (PRIME).
Signal representation-wise, PRIME employs leaky integrate-and-fire neurons to emulate the brain's inherent spiking mechanism.
For reconfigurability, inspired by the brain's dynamic adjustment of computational depth, PRIME employs an input-aware dynamic early stop policy.
arXiv Detail & Related papers (2024-07-26T09:35:02Z)
- Ternary Spike-based Neuromorphic Signal Processing System [12.32177207099149]
We take advantage of spiking neural networks (SNNs) and quantization technologies to develop an energy-efficient and lightweight neuromorphic signal processing system.
Our system is characterized by two principal innovations: a threshold-adaptive encoding (TAE) method and a quantized ternary SNN (QT-SNN).
The efficiency and efficacy of the proposed system highlight its potential as a promising avenue for energy-efficient signal processing.
arXiv Detail & Related papers (2024-07-07T09:32:19Z)
- Micro-power spoken keyword spotting on Xylo Audio 2 [0.0]
We describe the implementation of a spoken audio keyword-spotting benchmark "Aloha" on the Xylo Audio 2 (SYNS61210) Neuromorphic processor device.
We obtained high deployed quantized task accuracy (95%), exceeding the benchmark task accuracy.
We obtained best-in-class dynamic inference power (291 $\mu$W) and best-in-class inference efficiency (6.6 $\mu$J/inference).
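The two headline numbers imply a sustained inference rate if the dynamic power budget were spent entirely on back-to-back inferences; this is a rough consistency check, assuming both figures refer to the same operating point:

```python
def implied_inference_rate_hz(dynamic_power_w: float, energy_per_inference_j: float) -> float:
    # Rate at which inferences could run back-to-back if the dynamic
    # power budget were consumed entirely by inference energy.
    return dynamic_power_w / energy_per_inference_j

# Figures quoted above: 291 uW dynamic power, 6.6 uJ per inference.
rate = implied_inference_rate_hz(291e-6, 6.6e-6)  # ~44 inferences/s
```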
arXiv Detail & Related papers (2024-06-21T12:59:37Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100$\times$ as compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- Spikformer: When Spiking Neural Network Meets Transformer [102.91330530210037]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism.
We propose a novel Spiking Self Attention (SSA) as well as a powerful framework, named Spiking Transformer (Spikformer).
arXiv Detail & Related papers (2022-09-29T14:16:49Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.