Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors
- URL: http://arxiv.org/abs/2408.07734v1
- Date: Wed, 14 Aug 2024 17:51:20 GMT
- Title: Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors
- Authors: Marwan Besrour, Jacob Lavoie, Takwa Omrani, Gabriel Martin-Hardy, Esmaeil Ranjbar Koleibi, Jeremy Menard, Konin Koua, Philippe Marcoux, Mounir Boukadoum, Rejean Fontaine,
- Abstract summary: In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu m^{2}$, leading to a maximum spiking frequency of 300 kHz at 250 mV power supply.
- Score: 0.8426358786287627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The computational complexity of deep learning algorithms has given rise to significant speed and memory challenges for the execution hardware. In energy-limited portable devices, highly efficient processing platforms are indispensable for reproducing the prowess afforded by much bulkier processing platforms. In this work, we present a low-power Leaky Integrate-and-Fire (LIF) neuron design fabricated in TSMC's 28 nm CMOS technology as proof of concept to build an energy-efficient mixed-signal Neuromorphic System-on-Chip (NeuroSoC). The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu m^{2}$, leading to a maximum spiking frequency of 300 kHz at 250 mV power supply. These performances are used in a software model to emulate the dynamics of a Spiking Neural Network (SNN). Employing supervised backpropagation and a surrogate gradient technique, the resulting accuracy on the MNIST dataset, using 4-bit post-training quantization stands at 82.5\%. The approach underscores the potential of such ASIC implementation of quantized SNNs to deliver high-performance, energy-efficient solutions to various embedded machine-learning applications.
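The training pipeline described in the abstract (discrete-time LIF dynamics, supervised backpropagation with a surrogate gradient, and 4-bit post-training quantization) can be illustrated generically. The sketch below is a minimal PyTorch rendering under assumed constants (membrane decay beta, threshold, arctangent surrogate) and is not the authors' software model; the closing comment works out the spiking power implied by the reported 1.61 fJ/spike at 300 kHz.
```python
import torch
import torch.nn as nn

class AtanSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, arctangent surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # derivative of (1/pi) * atan(pi * x) + 1/2, a smooth stand-in for the Heaviside step
        return grad_output / (1.0 + (torch.pi * v_minus_thresh) ** 2)

class LIFLayer(nn.Module):
    """Discrete-time leaky integrate-and-fire layer with soft reset (illustrative constants)."""
    def __init__(self, in_features, out_features, beta=0.9, v_thresh=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta, self.v_thresh = beta, v_thresh

    def forward(self, x_seq):  # x_seq: (time_steps, batch, in_features)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = self.beta * v + self.fc(x_t)            # leaky integration of the input current
            s = AtanSurrogate.apply(v - self.v_thresh)  # spike if the membrane crosses threshold
            v = v - s * self.v_thresh                   # soft reset on spike
            spikes.append(s)
        return torch.stack(spikes)

def quantize_weights_4bit(model):
    """Naive symmetric 4-bit post-training quantization of every linear weight matrix."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            scale = m.weight.data.abs().max() / 7.0      # signed 4-bit integer range is [-8, 7]
            m.weight.data = torch.clamp((m.weight.data / scale).round(), -8, 7) * scale

# Back-of-the-envelope check of the reported figures: at the maximum rate,
# spiking power per neuron is about 1.61 fJ/spike * 300 kHz ~= 0.48 nW.
```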
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), the Scalable MNN (S-MNN) reduces the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Mem-elements based Neuromorphic Hardware for Neural Network Application [0.0]
The thesis investigates the utilization of memristive and memcapacitive crossbar arrays in low-power machine learning accelerators, offering a comprehensive co-design framework for deep neural networks (DNNs).
The model, implemented through a hybrid Python and PyTorch approach, accounts for various non-idealities and achieves training accuracies of 90.02% and 91.03% on the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays, respectively, on an 8-layer VGG network.
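As a rough illustration of the crossbar-based mapping that such co-design frameworks model, the sketch below maps signed weights onto a differential pair of conductances and adds a simple multiplicative programming-noise non-ideality; the conductance range and noise level are assumed values, not the thesis' model.
```python
import torch

def weights_to_conductances(w, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto a differential pair of positive conductances (assumed range, in siemens)."""
    w_scaled = w / w.abs().max()                      # normalize weights to [-1, 1]
    g_pos = g_min + (g_max - g_min) * torch.clamp(w_scaled, min=0)
    g_neg = g_min + (g_max - g_min) * torch.clamp(-w_scaled, min=0)
    return g_pos, g_neg

def crossbar_mvm(v_in, g_pos, g_neg, sigma=0.05):
    """Crossbar matrix-vector product I = G @ V with multiplicative programming noise as one example non-ideality."""
    noise_p = 1.0 + sigma * torch.randn_like(g_pos)
    noise_n = 1.0 + sigma * torch.randn_like(g_neg)
    return (g_pos * noise_p - g_neg * noise_n) @ v_in

# Usage: a 64x128 layer evaluated through the noisy crossbar model.
w = torch.randn(64, 128)
v = torch.rand(128)                                   # normalized input voltages
g_pos, g_neg = weights_to_conductances(w)
i_out = crossbar_mvm(v, g_pos, g_neg)                 # output currents, one per row
```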
arXiv Detail & Related papers (2024-03-05T14:28:40Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO$_3$ that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z) - SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js Energy-Throughput Efficiency [2.260725478207432]
Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency requirements of edge computing devices.
We present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks.
arXiv Detail & Related papers (2022-12-03T21:36:29Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
SNNs require very long spike trains (up to 1000 time steps) to reach an accuracy similar to that of their artificial neural network (ANN) counterparts for large models; the rate-coding sketch below illustrates why.
We present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding.
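The following is a generic Poisson rate-encoding example in PyTorch showing why plain rate coding needs long spike trains to represent values precisely; it is not the emerging encoding scheme proposed in the paper, and the step count and intensities are arbitrary.
```python
import torch

def poisson_encode(x, num_steps=1000):
    """Encode intensities x in [0, 1] as Bernoulli spike trains of length num_steps.
    The rate estimate of x only converges as num_steps grows, which is why
    plain rate coding can require very long spike trains."""
    return (torch.rand(num_steps, *x.shape) < x).float()

x = torch.tensor([0.1, 0.5, 0.9])           # example pixel intensities
spikes = poisson_encode(x, num_steps=1000)  # shape: (1000, 3)
print(spikes.mean(dim=0))                   # empirical firing rates approximate x
```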
arXiv Detail & Related papers (2022-06-06T10:56:25Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces the classification time by three orders of magnitude with a small 4.5% impact on accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule; a minimal form of this update is sketched after this summary.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
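Since the entry above hinges on STDP, here is a minimal pair-based STDP weight update; the learning rates, time constants, and weight bounds are illustrative assumptions rather than FSpiNN's parameters.
```python
import math

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes the postsynaptic one,
    depress otherwise. Times are in milliseconds; all constants are illustrative."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    else:                                         # post before pre -> depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)              # keep the weight in a bounded range

# Usage: a presynaptic spike at 10 ms followed by a postsynaptic spike at 15 ms strengthens the synapse.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```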
arXiv Detail & Related papers (2020-07-17T09:40:26Z) - A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas [1.4425878137951236]
We propose SPOON, a 28-nm event-driven CNN (eCNN) for adaptive edge computing and vision applications.
It embeds online learning with only 16.8% power and 11.8% area overheads, using the biologically-plausible direct random target projection (DRTP) algorithm (sketched after this summary).
With an energy per classification of 313 nJ at 0.6 V and a 0.32-mm$^{2}$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators.
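DRTP trains hidden layers with fixed random projections of the one-hot targets instead of backpropagated errors. The sketch below is a generic single-hidden-layer illustration in PyTorch; the dimensions, learning rate, and sign conventions are assumptions, and this is not SPOON's on-chip implementation.
```python
import torch

# Assumed toy dimensions: 784 inputs, 256 hidden units, 10 classes, learning rate 0.01.
d_in, d_hidden, d_out, lr = 784, 256, 10, 0.01
W1 = torch.randn(d_hidden, d_in) * 0.01
W2 = torch.randn(d_out, d_hidden) * 0.01
B1 = torch.randn(d_hidden, d_out)            # fixed random target-projection matrix (never trained)

def drtp_step(x, y_onehot):
    """One DRTP-style update: the hidden layer learns from a random projection of the target,
    so no backward error propagation (and no weight transport) is needed."""
    h = torch.relu(W1 @ x)                    # forward pass, hidden layer
    y_hat = torch.softmax(W2 @ h, dim=0)      # forward pass, output layer
    e_out = y_hat - y_onehot                  # output error, as in standard cross-entropy training
    delta_h = (B1 @ y_onehot) * (h > 0).float()   # local teaching signal from the projected target
    W2.sub_(lr * torch.outer(e_out, h))       # output layer: usual local update
    W1.add_(lr * torch.outer(delta_h, x))     # hidden layer: push activity toward the projected target

# Usage with one random sample labeled as class 3:
x = torch.rand(d_in)
y = torch.zeros(d_out); y[3] = 1.0
drtp_step(x, y)
```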
arXiv Detail & Related papers (2020-05-13T13:47:44Z)