TNNGen: Automated Design of Neuromorphic Sensory Processing Units for Time-Series Clustering
- URL: http://arxiv.org/abs/2412.17977v1
- Date: Mon, 23 Dec 2024 20:46:53 GMT
- Title: TNNGen: Automated Design of Neuromorphic Sensory Processing Units for Time-Series Clustering
- Authors: Prabhu Vellaisamy, Harideep Nair, Vamsikrishna Ratnakaram, Dhruv Gupta, John Paul Shen
- Abstract summary: Recent works proposed a microarchitecture framework and custom macro suite for designing highly energy-efficient application-specific TNNs.
However, these works rely on manual hardware design, and there is no open-source functional simulation framework for TNNs.
This paper introduces TNNGen, a pioneering effort towards the automated design of TNNs from PyTorch software models to post-layout netlists.
- Score: 2.1041384320978267
- Abstract: Temporal Neural Networks (TNNs), a special class of spiking neural networks, draw inspiration from the neocortex in utilizing spike-timings for information processing. Recent works proposed a microarchitecture framework and custom macro suite for designing highly energy-efficient application-specific TNNs. These recent works rely on manual hardware design, a labor-intensive and time-consuming process. Further, there is no open-source functional simulation framework for TNNs. This paper introduces TNNGen, a pioneering effort towards the automated design of TNNs from PyTorch software models to post-layout netlists. TNNGen comprises a novel PyTorch functional simulator (for TNN modeling and application exploration) coupled with a Python-based hardware generator (for PyTorch-to-RTL and RTL-to-Layout conversions). Seven representative TNN designs for time-series signal clustering across diverse sensory modalities are simulated and their post-layout hardware complexity and design runtimes are assessed to demonstrate the effectiveness of TNNGen. We also highlight TNNGen's ability to accurately forecast silicon metrics without running the hardware design flow.
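Since TNNGen's API is not spelled out in this listing, the sketch below is a minimal, runnable plain-PyTorch stand-in for the kind of spike-timing computation its functional simulator models: a column of ramp-no-leak neurons driven by input spike times, with 1-winner-take-all (1-WTA) lateral inhibition. The response function, time base, and constants are simplified assumptions, not TNNGen's actual implementation.

```python
import torch

# Simplified sketch of a TNN column: ramp-no-leak neurons driven by input
# spike times, followed by 1-winner-take-all (1-WTA) lateral inhibition.
# Constants and response function are assumptions for illustration only.
T_MAX = 8  # discrete time steps in one processing wave

def tnn_column(in_times, weights, theta):
    # in_times: (n_inputs,) spike times; weights: (n_neurons, n_inputs)
    t = torch.arange(T_MAX + 1).view(-1, 1)        # (T+1, 1) time axis
    ramp = (t - in_times).clamp(min=0).float()     # ramp-no-leak response
    potential = ramp @ weights.t()                 # (T+1, n_neurons)
    fired = potential >= theta
    # first step at which each neuron crosses the threshold (T_MAX+1 = never)
    out_times = torch.where(fired.any(dim=0),
                            fired.int().argmax(dim=0),
                            torch.tensor(T_MAX + 1))
    return out_times, out_times.argmin()           # spike times, WTA winner

out_times, winner = tnn_column(torch.randint(0, T_MAX, (16,)),
                               torch.rand(8, 16), theta=12.0)
```

The hardware-generator half of the flow (PyTorch-to-RTL and RTL-to-layout) has no faithful pure-Python analogue and is not sketched here.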
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- SNNAX -- Spiking Neural Networks in JAX [0.8321953606016751]
Spiking Neural Networks (SNNs) simulators are essential tools to prototype biologically inspired models and neuromorphic hardware architectures.
We present SNNAX, a JAX-based framework for simulating and training such models with PyTorch-like intuitiveness and JAX-like execution speed (a generic JAX sketch follows this entry).
arXiv Detail & Related papers (2024-09-04T16:14:14Z)
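SNNAX's own API is not shown in this listing, so the sketch below uses plain JAX to illustrate the kind of model such simulators compile and train: a leaky integrate-and-fire (LIF) layer unrolled over time with jax.lax.scan. It is an assumption-laden stand-in, not SNNAX code.

```python
import jax
import jax.numpy as jnp

def lif_step(v, i_in, beta=0.9, thresh=1.0):
    # leaky integrate-and-fire: decay membrane, integrate input, fire, reset
    v = beta * v + i_in
    spk = (v > thresh).astype(jnp.float32)
    return v - spk * thresh, spk

def simulate(currents):  # currents: (timesteps, n_neurons)
    init_v = jnp.zeros(currents.shape[1])
    _, spikes = jax.lax.scan(lif_step, init_v, currents)
    return spikes

spikes = simulate(jax.random.uniform(jax.random.PRNGKey(0), (100, 8)))
```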
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch (a usage sketch follows this entry).
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
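For context, a minimal snnTorch usage sketch; this assumes the standard public snnTorch API, and the IPU-optimized release described above may differ in setup.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)     # leaky integrate-and-fire neuron layer
mem = lif.init_leaky()        # initialize membrane potential
spikes = []
for _ in range(25):           # iterate over timesteps
    cur = torch.rand(1, 8)    # input current for this step
    spk, mem = lif(cur, mem)  # returns output spikes and updated membrane
    spikes.append(spk)
out = torch.stack(spikes)     # (timesteps, batch, neurons) spike raster
```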
- Sub-mW Neuromorphic SNN audio processing applications with Rockpool and Xylo [0.0]
Spiking Neural Networks (SNNs) provide an efficient computational mechanism for temporal signal processing.
SNNs have historically been difficult to configure, lacking a general method for finding solutions for arbitrary tasks.
Here we demonstrate a convenient high-level pipeline to design, train and deploy arbitrary temporal signal processing applications to sub-mW SNN inference hardware.
arXiv Detail & Related papers (2022-08-27T11:50:32Z)
- Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units [2.419276285404291]
Temporal Neural Networks (TNNs) are spiking neural networks that exhibit brain-like sensory processing with high energy efficiency.
This work presents ongoing research towards a custom design framework for efficient application-specific TNN-based Neuromorphic Sensory Processing Units (NSPUs).
arXiv Detail & Related papers (2022-05-27T21:51:05Z)
- Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing [74.31472195046099]
We exploit a low-rank tensor-train deep neural network (TT-DNN) to build an end-to-end deep learning pipeline, namely LR-TT-DNN.
A hybrid model combining LR-TT-DNN with a convolutional neural network (CNN) is set up to boost the performance.
Our empirical evidence demonstrates that the LR-TT-DNN and CNN+(LR-TT-DNN) models with fewer parameters can outperform their TT-DNN and CNN+(TT-DNN) counterparts (a minimal tensor-train sketch follows this entry).
arXiv Detail & Related papers (2022-03-11T15:55:34Z)
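As a worked illustration of the rank-limiting idea behind tensor-train layers (a minimal two-core TT-matrix, not the paper's LR-TT-DNN or its Riemannian optimizer):

```python
import torch

# Two-core tensor-train factorization of a (m1*m2) x (n1*n2) weight matrix.
m1, m2, n1, n2, r = 4, 8, 4, 8, 3
G1 = torch.randn(m1, n1, r)   # first TT-core
G2 = torch.randn(r, m2, n2)   # second TT-core
# cores store m1*n1*r + r*m2*n2 = 240 numbers vs 1024 for the full matrix

x = torch.randn(n1, n2)
# contract the cores with the input without materializing the full matrix
y = torch.einsum('iar,rjb,ab->ij', G1, G2, x).reshape(m1 * m2)

# sanity check against the explicit matrix-vector product
W = torch.einsum('iar,rjb->ijab', G1, G2).reshape(m1 * m2, n1 * n2)
assert torch.allclose(y, W @ x.reshape(-1), atol=1e-4)
```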
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs (a binarization sketch follows this entry).
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
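The sub-bit kernel-sharing scheme itself is beyond a short sketch, but the binary quantization it builds on is standard: binarize weights with sign() in the forward pass and use a straight-through estimator for gradients. A minimal version, reflecting conventional BNN practice rather than this paper's exact formulation:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize weights; pass gradients straight through (standard BNN)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # clip gradient outside [-1, 1]

w = torch.randn(8, 8, requires_grad=True)
wb = BinarizeSTE.apply(w)    # +1/-1 weights used in the forward pass
loss = (wb.sum() - 1.0) ** 2
loss.backward()              # gradients flow to the latent real-valued weights
```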
- A Microarchitecture Implementation Framework for Online Learning with Temporal Neural Networks [1.4530235554268331]
Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information.
This work proposes a microarchitecture framework for implementing TNNs using standard CMOS (a simplified spike-timing learning rule is sketched after this entry).
arXiv Detail & Related papers (2021-05-27T15:59:54Z)
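A drastically simplified spike-timing-dependent weight update in the spirit of TNN online learning; the paper's actual rule and constants are not reproduced here.

```python
import torch

def stdp_update(w, in_times, out_time, mu_plus=0.02, mu_minus=0.01):
    # strengthen weights whose input spike arrived at or before the output
    # spike (causal inputs); weaken the rest -- illustrative constants only
    causal = (in_times <= out_time).float()
    w = w + mu_plus * causal - mu_minus * (1.0 - causal)
    return w.clamp(0.0, 1.0)   # keep weights in the unit range

w = torch.rand(16)
w = stdp_update(w, in_times=torch.randint(0, 8, (16,)), out_time=4)
```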
- A Temporal Neural Network Architecture for Online Learning [0.6091702876917281]
Temporal neural networks (TNNs) communicate and process information encoded as relative spike times.
A TNN architecture is proposed and, as a proof-of-concept, TNN operation is demonstrated within the larger context of online supervised classification.
arXiv Detail & Related papers (2020-11-27T17:15:29Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs (a simplified pointer sketch follows this entry).
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
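A drastically simplified picture of the soft instruction pointer at IPA-GNN's core; learned branch decisions and per-statement RNN states are omitted, and the transition matrix here is random rather than learned.

```python
import torch

n_stmts, steps = 5, 4
pointer = torch.zeros(n_stmts)
pointer[0] = 1.0   # execution begins with all pointer mass on statement 0
# statement -> successor probabilities (learned from branch decisions in IPA-GNN)
trans = torch.softmax(torch.randn(n_stmts, n_stmts), dim=1)

for _ in range(steps):
    pointer = trans.t() @ pointer   # redistribute instruction-pointer mass
    # IPA-GNN also updates per-statement hidden states weighted by this mass
print(pointer)   # soft distribution over which statement is "executing"
```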
- The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)