A Custom 7nm CMOS Standard Cell Library for Implementing TNN-based
Neuromorphic Processors
- URL: http://arxiv.org/abs/2012.05419v1
- Date: Thu, 10 Dec 2020 02:31:57 GMT
- Title: A Custom 7nm CMOS Standard Cell Library for Implementing TNN-based
Neuromorphic Processors
- Authors: Harideep Nair, Prabhu Vellaisamy, Santha Bhasuthkar, and John Paul
Shen
- Abstract summary: A set of highly-optimized custom macro extensions is developed for a 7nm CMOS cell library for implementing Temporal Neural Networks (TNNs).
A TNN prototype (13,750 neurons and 315,000 synapses) for MNIST requires only 1.56 mm² of die area and consumes only 1.69 mW.
- Score: 1.1834561744686023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A set of highly-optimized custom macro extensions is developed for a 7nm CMOS
cell library for implementing Temporal Neural Networks (TNNs) that can mimic
brain-like sensory processing with extreme energy efficiency. A TNN prototype
(13,750 neurons and 315,000 synapses) for MNIST requires only 1.56 mm² of die area
and consumes only 1.69 mW.
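For a rough sense of scale, the headline numbers can be broken down per neuron and per synapse. The sketch below is a back-of-the-envelope calculation over the reported figures, not an analysis from the paper:

```python
# Back-of-the-envelope breakdown of the reported TNN prototype figures.
neurons = 13_750
synapses = 315_000
die_area_mm2 = 1.56
power_mw = 1.69

area_per_neuron_um2 = die_area_mm2 * 1e6 / neurons  # mm^2 -> um^2
power_per_synapse_nw = power_mw * 1e6 / synapses    # mW -> nW

print(f"~{area_per_neuron_um2:.0f} um^2 per neuron")  # ~113 um^2
print(f"~{power_per_synapse_nw:.1f} nW per synapse")  # ~5.4 nW
```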
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
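To see why the cubic-to-linear reduction matters for long sequences, compare illustrative operation counts at a few lengths (the numbers below are mine, not the paper's):

```python
# Illustrative gap between a cubic-time original MNN and a linear-time S-MNN.
for T in (1_000, 10_000, 100_000):  # sequence lengths (illustrative)
    print(f"T={T:>7,}: O(T^3) ~ {T**3:.1e} ops vs. O(T) ~ {T:.1e} ops")
```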
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
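The low OP counts of BNNs come from replacing multiply-accumulates with XNOR and popcount over 1-bit weights and activations; below is a minimal sketch of that kernel (illustrative, not the paper's search code):

```python
import numpy as np

def binary_dot(a: np.ndarray, w: np.ndarray) -> int:
    """Dot product of {-1,+1} vectors via XNOR + popcount on packed bits."""
    a_bits = np.packbits(a > 0)       # encode +1 as bit 1, -1 as bit 0
    w_bits = np.packbits(w > 0)
    xnor = ~(a_bits ^ w_bits)         # bit is 1 where the signs agree
    matches = np.unpackbits(xnor)[: a.size].sum()
    return int(2 * matches - a.size)  # matches minus mismatches

a = np.sign(np.random.randn(64)).astype(np.int8)
w = np.sign(np.random.randn(64)).astype(np.int8)
assert binary_dot(a, w) == int(a @ w)
```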
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Analog Spiking Neuron in CMOS 28 nm Towards Large-Scale Neuromorphic Processors [0.8426358786287627]
In this work, we present a low-power Leaky Integrate-and-Fire neuron design fabricated in TSMC's 28 nm CMOS technology.
The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 µm², leading to a maximum spiking frequency of 300 kHz at a 250 mV power supply.
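The Leaky Integrate-and-Fire dynamics realized here in analog reduce, in discrete time, to a leaky accumulator with a threshold and reset; here is a minimal behavioral model (parameter values are illustrative, not the fabricated circuit's):

```python
def lif_neuron(input_current, leak=0.95, threshold=1.0):
    """Discrete-time Leaky Integrate-and-Fire: leak, integrate, fire, reset."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i      # membrane potential decays, then integrates input
        if v >= threshold:    # fire once the threshold is crossed...
            spikes.append(1)
            v = 0.0           # ...then reset the membrane
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 1, 0, 0, 1]
```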
arXiv Detail & Related papers (2024-08-14T17:51:20Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low-conductance regime, which enables stable and energy-efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
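The rate-decoding scheme these methods rely on (and which this paper moves beyond) can be sketched in a few lines: average spikes over time, then map the rates through a fully-connected readout. Everything below is a minimal illustration, not the paper's network:

```python
import numpy as np

def decode_actions(spike_trains, W, b):
    """Rate decoding: mean firing rate per neuron, then a linear readout."""
    rates = spike_trains.mean(axis=1)  # (neurons, timesteps) -> rates in [0, 1]
    return np.tanh(W @ rates + b)      # bounded, continuous action vector

spikes = (np.random.rand(16, 100) < 0.2).astype(np.float32)  # 16 neurons, 100 steps
W, b = np.random.randn(4, 16), np.zeros(4)                   # 4-dim action space
print(decode_actions(spikes, W, b))
```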
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- TNN7: A Custom Macro Suite for Implementing Highly Optimized Designs of
Neuromorphic TNNs [2.9068923524970227]
Temporal Neural Networks (TNNs) exhibit energy-efficient online sensory processing capabilities.
This work proposes TNN7, a suite of nine highly optimized custom macros developed using a predictive 7nm Process Design Kit (PDK).
An unsupervised time-series clustering TNN delivering competitive performance can be implemented within 40 µW of power and 0.05 mm² of area.
A 4-layer TNN that achieves an MNIST error rate of 1% consumes only 18 mW and occupies 24.63 mm².
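TNNs represent values as spike times (a stronger input spikes earlier), and a basic TNN column resolves its computation with a winner-take-all on arrival order. A toy sketch of that encoding, with names and value ranges of my own choosing:

```python
def encode(values, t_max=8):
    """Temporal coding: larger value -> earlier spike (assumes 0 <= v <= t_max)."""
    return {name: t_max - v for name, v in values.items()}

def wta_first_spike(spike_times):
    """1-winner-take-all: the earliest spike wins (ties broken by dict order)."""
    return min(spike_times, key=spike_times.get)

inputs = {"a": 7, "b": 2, "c": 5}  # toy intensities in [0, 8]
times = encode(inputs)             # {'a': 1, 'b': 6, 'c': 3}
print(wta_first_spike(times))      # 'a' spikes first and wins
```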
arXiv Detail & Related papers (2022-05-16T01:03:41Z)
- Sparse Compressed Spiking Neural Network Accelerator for Object
Detection [0.1246030133914898]
Spiking neural networks (SNNs) are inspired by the human brain; they transmit binary spikes and produce highly sparse activation maps.
This paper proposes a sparse compressed spiking neural network accelerator that takes advantage of the high sparsity of activation maps and weights.
The experimental result of the neural network shows 71.5% mAP with mixed (1,3) time steps on the IVS 3cls dataset.
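Exploiting activation sparsity means storing and computing on only the nonzero spikes; below is a compressed (index, value) sketch of the idea (illustrative, not the accelerator's actual on-chip format):

```python
import numpy as np

def compress(activations):
    """Keep only nonzero entries as (index, value) pairs."""
    idx = np.flatnonzero(activations)
    return idx, activations[idx]

def sparse_matvec(W, idx, vals):
    """Accumulate only the weight columns selected by nonzero activations."""
    return W[:, idx] @ vals  # all zero activations are skipped entirely

acts = np.array([0, 0, 1, 0, 1, 0, 0, 1], dtype=np.float32)  # sparse spike map
W = np.random.randn(4, 8).astype(np.float32)
idx, vals = compress(acts)
assert np.allclose(sparse_matvec(W, idx, vals), W @ acts)
```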
arXiv Detail & Related papers (2022-05-02T09:56:55Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural
Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- A Microarchitecture Implementation Framework for Online Learning with
Temporal Neural Networks [1.4530235554268331]
Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information.
This work proposes a microarchitecture framework for implementing TNNs using standard CMOS.
arXiv Detail & Related papers (2021-05-27T15:59:54Z)
- Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for
Sensory Processing [4.084672048082021]
Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex.
This work focuses on implementing TNNs using off-the-shelf digital CMOS technology.
arXiv Detail & Related papers (2020-08-27T20:36:34Z)
- The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural
Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.