Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units
- URL: http://arxiv.org/abs/2205.14248v1
- Date: Fri, 27 May 2022 21:51:05 GMT
- Title: Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units
- Authors: Prabhu Vellaisamy and John Paul Shen
- Abstract summary: Temporal Neural Networks (TNNs) are spiking neural networks that exhibit brain-like sensory processing with high energy efficiency.
This work presents the ongoing research towards developing a custom design framework for designing efficient application-specific TNN-based Neuromorphic Sensory Processing Units (NSPUs).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Neural Networks (TNNs) are spiking neural networks that exhibit
brain-like sensory processing with high energy efficiency. This work presents
ongoing research towards a custom design framework for efficient,
application-specific TNN-based Neuromorphic Sensory Processing Units (NSPUs).
This paper examines previous works on NSPU designs for UCR time-series
clustering and MNIST image classification applications. We describe current
ideas for a custom design framework and tools that enable an efficient
software-to-hardware design flow for rapid design space exploration of
application-specific NSPUs, while leveraging EDA tools to obtain post-layout
netlists and power-performance-area (PPA) metrics. Future research directions
are also outlined.
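The rapid design-space-exploration step described in the abstract can be illustrated with a toy driver loop. Everything below is a hypothetical sketch: the parameter names and cost constants are invented placeholders, and the analytical cost model stands in for the real flow, in which PPA metrics would come from post-layout netlists produced by EDA tools.

```python
# Hypothetical design-space-exploration sketch for an NSPU-like design.
# The cost model is a stand-in: real area/power/latency would be measured
# from post-layout netlists, not computed analytically.
from itertools import product

def estimate_ppa(neurons, synapses_per_neuron, resolution_bits):
    """Toy cost model: area and power scale with synapse count and weight
    resolution; latency scales with resolution (temporal/unary coding)."""
    synapses = neurons * synapses_per_neuron
    area_um2 = synapses * 25.0 * resolution_bits      # placeholder constant
    power_uw = synapses * 0.8 * resolution_bits       # placeholder constant
    latency_ns = 2.0 * (2 ** resolution_bits)         # unary timing window
    return area_um2, power_uw, latency_ns

def explore(design_space):
    """Return configurations not dominated on (area, power, latency)."""
    points = [(cfg, estimate_ppa(*cfg)) for cfg in design_space]
    pareto = []
    for cfg, ppa in points:
        dominated = any(
            all(o <= p for o, p in zip(other, ppa)) and other != ppa
            for _, other in points
        )
        if not dominated:
            pareto.append((cfg, ppa))
    return pareto

# Sweep a small grid: (neurons, synapses per neuron, weight resolution bits).
space = list(product([16, 32], [64, 128], [3, 4]))
for cfg, (area, power, lat) in explore(space):
    print(cfg, f"area={area:.0f}um^2 power={power:.1f}uW latency={lat:.0f}ns")
```

With a monotone cost model like this toy one, only the minimal configuration survives the Pareto filter; a realistic flow would expose genuine trade-offs (e.g. higher resolution buying accuracy at the cost of area and latency).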
Related papers
- TNNGen: Automated Design of Neuromorphic Sensory Processing Units for Time-Series Clustering [2.1041384320978267]
Recent works proposed a microarchitecture framework and custom macro suite for highly energy-efficient application-specific TNNs.
However, there is no open-source functional simulation framework for TNNs.
This paper introduces TNNGen, a pioneering effort towards automated design of TNNs from PyTorch software models to post-netlists.
arXiv Detail & Related papers (2024-12-23T20:46:53Z)
- SpikeAtConv: An Integrated Spiking-Convolutional Attention Architecture for Energy-Efficient Neuromorphic Vision Processing [11.687193535939798]
Spiking Neural Networks (SNNs) offer a biologically inspired alternative to conventional artificial neural networks.
SNNs have yet to achieve competitive performance on complex visual tasks, such as image classification.
This study introduces a novel SNN architecture designed to enhance efficacy and task accuracy.
arXiv Detail & Related papers (2024-11-26T13:57:38Z)
- NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals [58.83169560132308]
We introduce NNsight and NDIF, technologies that work in tandem to enable scientific study of very large neural networks.
NNsight is an open-source system that extends PyTorch to introduce deferred remote execution.
NDIF is a scalable inference service that executes NNsight requests, allowing users to share GPU resources and pretrained models.
arXiv Detail & Related papers (2024-07-18T17:59:01Z)
- NACHOS: Neural Architecture Search for Hardware Constrained Early Exit Neural Networks [6.279164022876874]
Early Exit Neural Networks (EENNs) endow a standard Deep Neural Network (DNN) with Early Exit Classifiers (EECs).
This work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS)
NACHOS is the first NAS framework for the design of optimal EENNs satisfying constraints on the accuracy and the number of Multiply and Accumulate (MAC) operations performed by the EENNs at inference time.
arXiv Detail & Related papers (2024-01-24T09:48:12Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"Smart ecosystems" are being formed in which sensing happens concurrently rather than in isolation.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks [3.9121275263540087]
We present NeuroXplorer, a framework that is based on a generalized template for modeling a neuromorphic architecture.
NeuroXplorer can perform both low-level cycle-accurate architectural simulations and high-level analysis with data-flow abstractions.
We demonstrate the architectural exploration capabilities of NeuroXplorer through case studies with many state-of-the-art machine learning models.
arXiv Detail & Related papers (2021-05-04T23:31:11Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search.
arXiv Detail & Related papers (2020-10-27T18:08:57Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS)
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- AutoML for Multilayer Perceptron and FPGA Co-design [0.0]
State-of-the-art Neural Network Architectures (NNAs) are challenging to design and implement efficiently in hardware.
Much of the recent research in the auto-design of NNAs has focused on convolution networks and image recognition.
We develop and test a general multilayer perceptron (MLP) flow that can take arbitrary datasets as input and automatically produce optimized NNAs and hardware designs.
arXiv Detail & Related papers (2020-09-14T02:37:51Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.