Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for
Sensory Processing
- URL: http://arxiv.org/abs/2009.00457v1
- Date: Thu, 27 Aug 2020 20:36:34 GMT
- Title: Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for
Sensory Processing
- Authors: Harideep Nair, John Paul Shen, James E. Smith
- Abstract summary: Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex.
This work focuses on implementing TNNs using off-the-shelf digital CMOS technology.
- Score: 4.084672048082021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Neural Networks (TNNs) use time as a resource to represent and
process information, mimicking the behavior of the mammalian neocortex. This
work focuses on implementing TNNs using off-the-shelf digital CMOS technology.
A microarchitecture framework is introduced with a hierarchy of building blocks
including: multi-neuron columns, multi-column layers, and multi-layer TNNs. We
present the direct CMOS gate-level implementation of the multi-neuron column
model as the key building block for TNNs. Post-synthesis results are obtained
using Synopsys tools and the 45 nm CMOS standard cell library. The TNN
microarchitecture framework is embodied in a set of characteristic equations
for assessing the total gate count, die area, compute time, and power
consumption for any TNN design. We develop a multi-layer TNN prototype of 32M
gates. In a 7 nm CMOS process, it occupies only 1.54 mm^2 of die area, consumes 7.26 mW of
power, and can process 28x28 images at 107M FPS (9.34 ns per image). We evaluate
the prototype's performance and complexity relative to a recent
state-of-the-art TNN model.
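As a rough illustration of how such characteristic equations can be used, the sketch below organizes back-of-envelope estimates for gate count, die area, power, and compute time as functions of column and layer parameters. All parameter names and coefficient values here are illustrative placeholders rather than the paper's fitted constants; only the 9.34 ns per-image latency and the resulting ~107M FPS figure come from the abstract.

# Illustrative back-of-envelope estimator in the spirit of the paper's
# characteristic equations. Coefficients are placeholders, not the paper's values.
from dataclasses import dataclass

@dataclass
class TNNDesign:
    neurons_per_column: int    # neurons in one column (the key building block)
    synapses_per_neuron: int   # synaptic inputs per neuron
    columns_per_layer: int
    layers: int

def estimate(design: TNNDesign,
             gates_per_synapse: float = 10.0,   # placeholder complexity coefficient
             area_per_gate_mm2: float = 5e-8,   # placeholder 7 nm-class cell area
             power_per_gate_mw: float = 2e-7,   # placeholder switching + leakage power
             time_per_layer_ns: float = 3.0):   # placeholder per-layer compute time
    """Return rough totals for gate count, die area, power, and compute time."""
    synapses = (design.neurons_per_column * design.synapses_per_neuron
                * design.columns_per_layer * design.layers)
    gates = synapses * gates_per_synapse
    return {
        "gate_count": gates,
        "die_area_mm2": gates * area_per_gate_mm2,
        "power_mw": gates * power_per_gate_mw,
        "compute_time_ns": design.layers * time_per_layer_ns,
    }

# The abstract's throughput figure follows directly from its per-image latency:
ns_per_image = 9.34
fps = 1e9 / ns_per_image           # ~1.07e8 frames per second
print(f"{fps / 1e6:.0f}M FPS")     # prints "107M FPS"

Plugging in the paper's actual per-synapse gate counts and 7 nm area/power coefficients in place of these placeholders should recover estimates comparable to its reported 32M-gate, 1.54 mm^2, 7.26 mW prototype figures.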
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities with respect to the sequence length from cubic and quadratic, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks [4.0300632886917]
SpikeSim is a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.
We propose SNN topological modifications leading to 1.24x and 10x reductions in the neuronal module's area and the overall energy-delay product, respectively.
arXiv Detail & Related papers (2022-10-24T01:07:17Z)
- Towards a General Purpose CNN for Long Range Dependencies in $\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential ($1\mathrm{D}$) and visual data ($2\mathrm{D}$).
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
arXiv Detail & Related papers (2022-06-07T15:48:02Z)
- TNN7: A Custom Macro Suite for Implementing Highly Optimized Designs of Neuromorphic TNNs [2.9068923524970227]
Temporal Neural Networks (TNNs) exhibit energy-efficient online sensory processing capabilities.
This work proposes TNN7, a suite of nine highly optimized custom macros developed using a predictive 7 nm Process Design Kit (PDK).
An unsupervised time-series clustering TNN delivering competitive performance can be implemented within 40 uW of power and 0.05 mm^2 of area.
A 4-layer TNN that achieves an MNIST error rate of 1% consumes only 18 mW and 24.63 mm^2.
arXiv Detail & Related papers (2022-05-16T01:03:41Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- A Microarchitecture Implementation Framework for Online Learning with Temporal Neural Networks [1.4530235554268331]
Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information.
This work proposes a microarchitecture framework for implementing TNNs using standard CMOS.
arXiv Detail & Related papers (2021-05-27T15:59:54Z)
- A Custom 7nm CMOS Standard Cell Library for Implementing TNN-based Neuromorphic Processors [1.1834561744686023]
A set of highly-optimized custom macro extensions is developed for a 7nm CMOS cell library for implementing Temporal Neural Networks (TNNs).
A TNN prototype (13,750 neurons and 315,000 synapses) for MNIST requires only 1.56 mm^2 of die area and consumes only 1.69 mW.
arXiv Detail & Related papers (2020-12-10T02:31:57Z)
- A Temporal Neural Network Architecture for Online Learning [0.6091702876917281]
Temporal neural networks (TNNs) communicate and process information encoded as relative spike times (a generic latency-coding sketch follows this list).
A TNN architecture is proposed and, as a proof-of-concept, TNN operation is demonstrated within the larger context of online supervised classification.
arXiv Detail & Related papers (2020-11-27T17:15:29Z)
- Tensor train decompositions on recurrent networks [60.334946204107446]
Matrix product state (MPS) tensor trains have more attractive features than matrix product operators (MPOs) in terms of storage reduction and computing time at inference.
We show that MPS tensor trains should be at the forefront of LSTM network compression, through a theoretical analysis and practical experiments on NLP tasks.
arXiv Detail & Related papers (2020-06-09T18:25:39Z)
- The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)
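The main abstract above and the "A Temporal Neural Network Architecture for Online Learning" entry both describe TNNs as encoding information in relative spike times. The sketch below illustrates one common latency-coding scheme and a toy winner-take-all readout; the encoding range, threshold, and weight setup are illustrative assumptions and do not reproduce the column model used in these papers.

# Generic latency (relative spike-time) coding sketch; illustrative only.
import numpy as np

def intensity_to_spike_time(pixels: np.ndarray, t_max: int = 8) -> np.ndarray:
    """Map intensities in [0, 255] to integer spike times in [0, t_max].
    Stronger inputs spike earlier (time 0); weaker inputs spike later."""
    norm = pixels.astype(float) / 255.0
    return np.round((1.0 - norm) * t_max).astype(int)   # smaller = earlier = stronger

def first_spike_winner(spike_times: np.ndarray, weights: np.ndarray) -> int:
    """Toy winner-take-all readout: each 'neuron' fires once its weighted count of
    arrived spikes crosses a threshold; the earliest time step with a firing neuron
    decides the winner (ties at that step broken by higher potential)."""
    n_neurons, n_inputs = weights.shape
    threshold = n_inputs / 4.0                       # arbitrary illustrative threshold
    for t in range(int(spike_times.max()) + 1):
        arrived = (spike_times <= t).astype(float)   # spikes that have arrived by time t
        potentials = weights @ arrived
        fired = np.flatnonzero(potentials >= threshold)
        if fired.size:
            return int(fired[potentials[fired].argmax()])
    return -1                                        # no neuron reached threshold

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=28 * 28)           # a stand-in 28x28 input
weights = rng.random((10, 28 * 28))                  # stand-in synaptic weights
print("winner neuron:", first_spike_winner(intensity_to_spike_time(image), weights))

The point this is meant to illustrate, in line with the abstracts above, is that computation ends as soon as the earliest neuron crosses threshold, so arrival time itself carries the information.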
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.