THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js
Energy-Throughput Efficiency
- URL: http://arxiv.org/abs/2212.01696v1
- Date: Sat, 3 Dec 2022 21:36:29 GMT
- Title: THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js
Energy-Throughput Efficiency
- Authors: Mayank Senapati, Manil Dev Gomony, Sherif Eissa, Charlotte Frenkel,
and Henk Corporaal
- Abstract summary: Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet Energy-Throughput (ET) efficiency needed for edge computing devices.
We present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing using biologically inspired Spiking Neural Networks
(SNNs) is a promising solution to meet Energy-Throughput (ET) efficiency needed
for edge computing devices. Neuromorphic hardware architectures that emulate
SNNs in analog/mixed-signal domains have been proposed to achieve
order-of-magnitude higher energy efficiency than all-digital architectures,
however at the expense of limited scalability, susceptibility to noise, complex
verification, and poor flexibility. On the other hand, state-of-the-art digital
neuromorphic architectures focus either on achieving high energy efficiency
(Joules/synaptic operation (SOP)) or throughput efficiency (SOPs/second/area),
resulting in poor ET efficiency. In this work, we present THOR, an all-digital
neuromorphic processor with a novel memory hierarchy and neuron update
architecture that addresses both energy consumption and throughput bottlenecks.
We implemented THOR in 28nm FDSOI CMOS technology and our post-layout results
demonstrate an ET efficiency of 7.29G $\text{TSOP}^2/\text{mm}^2\text{Js}$ at
0.9V, 400 MHz, which represents a 3X improvement over state-of-the-art digital
neuromorphic processors.
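The ET efficiency metric quoted above is the product of energy efficiency (SOP/J) and areal throughput efficiency (SOP/s/mm$^2$), giving units of SOP$^2$/(mm$^2\cdot$J$\cdot$s). A minimal sketch of the computation, using hypothetical illustrative values rather than THOR's measured operating point:

```python
def et_efficiency(sop_per_joule: float, sop_per_s_per_mm2: float) -> float:
    """Energy-Throughput (ET) efficiency: the product of energy
    efficiency (SOP/J) and areal throughput efficiency (SOP/s/mm^2),
    in SOP^2 / (mm^2 * J * s)."""
    return sop_per_joule * sop_per_s_per_mm2

# Hypothetical operating point (illustrative, not THOR's reported numbers):
eta_energy = 2.0e12      # SOP/J, i.e. 0.5 pJ per synaptic operation
eta_throughput = 5.0e9   # SOP/s/mm^2
print(et_efficiency(eta_energy, eta_throughput))  # 1e+22 SOP^2/(mm^2*J*s)
```

Because the metric multiplies the two efficiencies, an architecture that sacrifices one for the other scores poorly, which is the imbalance the abstract attributes to prior digital designs.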
Related papers
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks.
Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption.
Split computing, where an SNN is partitioned across two devices, is a promising solution.
This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu\text{m}^2$, leading to a maximum spiking frequency of 300 kHz at a 250 mV power supply.
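The Leaky Integrate-and-Fire model referenced above can be summarized in discrete time: the membrane potential leaks toward its resting value, integrates input current, and emits a spike (then resets) when it crosses a threshold. A minimal behavioral sketch with generic, hypothetical parameters, not the fabricated circuit's values:

```python
def lif_step(v, i_in, v_rest=0.0, v_th=1.0, leak=0.9, v_reset=0.0):
    """One discrete-time Leaky Integrate-and-Fire update.
    Returns (new membrane potential, whether the neuron spiked)."""
    v = v_rest + leak * (v - v_rest) + i_in  # leak toward rest, then integrate input
    if v >= v_th:                            # threshold crossing -> emit spike, reset
        return v_reset, True
    return v, False

# Drive the neuron with a constant input and count spikes over 100 steps.
v, spikes = 0.0, 0
for _ in range(100):
    v, fired = lif_step(v, i_in=0.2)
    spikes += fired
```

With these parameters the neuron settles into a regular firing pattern; analog implementations like the one above realize the same leak/integrate/fire dynamics with transistor currents rather than arithmetic.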
arXiv Detail & Related papers (2024-08-14T17:51:20Z)
- Ternary Spike-based Neuromorphic Signal Processing System [12.32177207099149]
We take advantage of spiking neural networks (SNNs) and quantization technologies to develop an energy-efficient and lightweight neuromorphic signal processing system.
Our system is characterized by two principal innovations: a threshold-adaptive encoding (TAE) method and a quantized ternary SNN (QT-SNN).
The efficiency and efficacy of the proposed system highlight its potential as a promising avenue for energy-efficient signal processing.
arXiv Detail & Related papers (2024-07-07T09:32:19Z)
- NeuroNAS: A Framework for Energy-Efficient Neuromorphic Compute-in-Memory Systems using Hardware-Aware Spiking Neural Architecture Search [6.006032394972252]
Spiking Neural Networks (SNNs) have demonstrated capabilities for solving diverse machine learning tasks with ultra-low power/energy consumption.
To maximize the performance and efficiency of SNN inference, Compute-in-Memory (CIM) hardware accelerators have been employed.
We propose NeuroNAS, a novel framework for developing energy-efficient neuromorphic CIM systems.
arXiv Detail & Related papers (2024-06-30T09:51:58Z)
- Micro-power spoken keyword spotting on Xylo Audio 2 [0.0]
We describe the implementation of a spoken audio keyword-spotting benchmark "Aloha" on the Xylo Audio 2 (SYNS61210) Neuromorphic processor device.
We obtained high deployed quantized task accuracy (95%), exceeding the benchmark task accuracy.
We obtained best-in-class dynamic inference power (291 $\mu$W) and best-in-class inference efficiency (6.6 $\mu$J/inference).
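Dynamic power and per-inference energy figures like these imply a sustained inference rate: dividing power by energy per inference gives inferences per second. A back-of-envelope check using the quoted figures (the derived rate is our estimate, not a reported result):

```python
dynamic_power_w = 291e-6    # 291 uW dynamic inference power
energy_per_inf_j = 6.6e-6   # 6.6 uJ per inference

# Sustained inference rate implied by power / energy-per-inference
rate_hz = dynamic_power_w / energy_per_inf_j
print(f"{rate_hz:.1f} inferences/s")  # prints 44.1 inferences/s
```

For an always-on keyword-spotting workload, a rate in the tens of inferences per second comfortably covers real-time audio framing, which is why the micro-watt power figure is the headline number.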
arXiv Detail & Related papers (2024-06-21T12:59:37Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Spikformer: When Spiking Neural Network Meets Transformer [102.91330530210037]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism.
We propose a novel Spiking Self Attention (SSA) as well as a powerful framework, named Spiking Transformer (Spikformer).
arXiv Detail & Related papers (2022-09-29T14:16:49Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models.
We present a novel hardware architecture that can efficiently support SNN with emerging neural encoding.
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.