Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units
- URL: http://arxiv.org/abs/2205.14248v1
- Date: Fri, 27 May 2022 21:51:05 GMT
- Title: Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units
- Authors: Prabhu Vellaisamy and John Paul Shen
- Abstract summary: Temporal Neural Networks (TNNs) are spiking neural networks that exhibit brain-like sensory processing with high energy efficiency.
This work presents ongoing research towards developing a custom design framework for efficient application-specific TNN-based Neuromorphic Sensory Processing Units (NSPUs).
- Score: 2.419276285404291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Neural Networks (TNNs) are spiking neural networks that exhibit
brain-like sensory processing with high energy efficiency. This work presents
the ongoing research towards developing a custom design framework for designing
efficient application-specific TNN-based Neuromorphic Sensory Processing Units
(NSPUs). This paper examines previous works on NSPU designs for UCR time-series
clustering and MNIST image classification applications. We describe current ideas
for a custom design framework and tools that enable an efficient
software-to-hardware design flow for rapid design-space exploration of
application-specific NSPUs, while leveraging EDA tools to obtain post-layout
netlists and power-performance-area (PPA) metrics. Future research directions
are also outlined.
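The design-space exploration loop described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual tooling: the config fields (`columns`, `neurons`, `synapse_bits`) and the analytical PPA model stand in for the post-layout metrics an EDA flow would report.

```python
# Hypothetical sketch of rapid design-space exploration for
# application-specific NSPUs. All names and formulas are illustrative
# assumptions, not the framework's real API.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class NSPUConfig:
    columns: int        # number of TNN columns (hypothetical parameter)
    neurons: int        # neurons per column
    synapse_bits: int   # synaptic weight resolution

def estimate_ppa(cfg):
    """Toy analytical PPA model standing in for post-layout EDA results."""
    area = cfg.columns * cfg.neurons * cfg.synapse_bits * 1e-4   # mm^2
    power = area * 2.5                                           # mW
    delay = 1.0 + 0.01 * cfg.neurons                             # ns
    return power, delay, area

def explore(space):
    """Return the config minimizing the power-delay-area product."""
    return min(space, key=lambda c: (lambda p, d, a: p * d * a)(*estimate_ppa(c)))

space = [NSPUConfig(c, n, b)
         for c, n, b in product([4, 8], [16, 32, 64], [3, 4])]
best = explore(space)
```

In a real flow, `estimate_ppa` would be replaced by invoking synthesis and place-and-route on the generated RTL for each candidate configuration.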
Related papers
- NACHOS: Neural Architecture Search for Hardware Constrained Early Exit
Neural Networks [6.279164022876874]
Early Exit Neural Networks (EENNs) endow a standard Deep Neural Network (DNN) with Early Exit Classifiers (EECs).
This work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS)
NACHOS is the first NAS framework for the design of optimal EENNs satisfying constraints on the accuracy and the number of Multiply and Accumulate (MAC) operations performed by the EENNs at inference time.
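The core idea of hardware-constrained NAS, as summarized above, can be illustrated with a minimal sketch (this is not NACHOS itself): candidate early-exit networks exceeding a MAC budget are rejected, and the survivors are ranked by accuracy. The candidate tuples and the budget below are made-up illustrative values.

```python
# Minimal sketch of MAC-constrained architecture selection.
MAC_BUDGET = 50_000_000  # example constraint on inference-time MACs

# (name, estimated MACs at inference, validation accuracy) - illustrative
candidates = [
    ("eenn_a", 80_000_000, 0.94),
    ("eenn_b", 45_000_000, 0.92),
    ("eenn_c", 30_000_000, 0.90),
]

# Reject designs that violate the hardware constraint.
feasible = [c for c in candidates if c[1] <= MAC_BUDGET]

# Among feasible designs, pick the most accurate one.
best = max(feasible, key=lambda c: c[2])
```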
arXiv Detail & Related papers (2024-01-24T09:48:12Z)
- Free-Space Optical Spiking Neural Network [0.0]
We introduce the Free-space Optical deep Spiking Convolutional Neural Network (OSCNN)
This novel approach draws inspiration from computational models of the human eye.
Our results demonstrate promising performance with minimal latency and power consumption compared to electronic counterparts.
arXiv Detail & Related papers (2023-11-08T09:41:14Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
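Two of the coding schemes such a heterogeneous SNN might mix can be sketched briefly: rate coding (spike count over a window) and time-to-first-spike (TTFS) latency coding. The window length and formulas below are assumptions for illustration, not the paper's definitions.

```python
T = 10  # discrete time steps in the coding window (illustrative)

def rate_code(x):
    """Encode intensity x in [0, 1] as a spike train with ~x*T spikes."""
    n = round(x * T)
    return [1] * n + [0] * (T - n)

def ttfs_code(x):
    """Encode intensity x in [0, 1] as a single early spike:
    stronger inputs fire sooner (latency coding)."""
    t_fire = round((1.0 - x) * (T - 1))
    return [1 if t == t_fire else 0 for t in range(T)]
```

Rate coding is robust but spike-hungry; TTFS coding uses one spike per value, which is why mixing schemes per layer or per modality can trade accuracy against energy.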
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- AutoPINN: When AutoML Meets Physics-Informed Neural Networks [30.798918516407376]
PINNs enable the estimation of critical parameters, which are unobservable via physical tools, through observable variables.
Existing PINNs are often manually designed, which is time-consuming and may lead to suboptimal performance.
We propose a framework that enables the automated design of PINNs by combining AutoML and PINNs.
arXiv Detail & Related papers (2022-12-08T03:44:08Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"Smart ecosystems" are being formed in which sensing happens concurrently rather than in isolation.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks [3.9121275263540087]
We present NeuroXplorer, a framework that is based on a generalized template for modeling a neuromorphic architecture.
NeuroXplorer can perform both low-level cycle-accurate architectural simulations and high-level analysis with data-flow abstractions.
We demonstrate the architectural exploration capabilities of NeuroXplorer through case studies with many state-of-the-art machine learning models.
arXiv Detail & Related papers (2021-05-04T23:31:11Z)
- Design Space for Graph Neural Networks [81.88707703106232]
We study the architectural design space for Graph Neural Networks (GNNs) which consists of 315,000 different designs over 32 different predictive tasks.
Our key results include: (1) A comprehensive set of guidelines for designing well-performing GNNs; (2) while best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-11-17T18:59:27Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search.
arXiv Detail & Related papers (2020-10-27T18:08:57Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS)
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- AutoML for Multilayer Perceptron and FPGA Co-design [0.0]
State-of-the-art Neural Network Architectures (NNAs) are challenging to design and implement efficiently in hardware.
Much of the recent research in the auto-design of NNAs has focused on convolution networks and image recognition.
We develop and test a general multilayer perceptron (MLP) flow that can take arbitrary datasets as input and automatically produce optimized NNAs and hardware designs.
arXiv Detail & Related papers (2020-09-14T02:37:51Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
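The rectified linear PSP idea behind this paper can be sketched as follows: each input spike at time t_i contributes a potential that grows linearly after t_i, K(t - t_i) = max(0, t - t_i), and the neuron fires when the weighted sum first crosses a threshold. The weights, spike times, threshold, and scan resolution below are illustrative assumptions, not values from the paper.

```python
def rel_psp(t, t_spike):
    """Rectified linear PSP kernel: 0 before the spike, linear growth after."""
    return max(0.0, t - t_spike)

def first_spike_time(weights, in_times, threshold, t_max=100.0, dt=0.1):
    """Scan time until the membrane potential reaches the firing threshold."""
    steps = int(t_max / dt)
    for k in range(steps + 1):
        t = k * dt
        # Membrane potential: weighted sum of PSP contributions.
        v = sum(w * rel_psp(t, ts) for w, ts in zip(weights, in_times))
        if v >= threshold:
            return t
    return None  # neuron never fires within t_max

# Example: two input spikes at t=0.0 and t=2.0 with illustrative weights.
t_out = first_spike_time(weights=[0.5, 1.0], in_times=[0.0, 2.0],
                         threshold=3.0)
```

Because the kernel is piecewise linear, the output spike time is a piecewise linear function of the input spike times, which is what makes error backpropagation through spike timings tractable.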
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.