A Microarchitecture Implementation Framework for Online Learning with
Temporal Neural Networks
- URL: http://arxiv.org/abs/2105.13262v1
- Date: Thu, 27 May 2021 15:59:54 GMT
- Authors: Harideep Nair, John Paul Shen and James E. Smith
- Abstract summary: Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information.
This work proposes a microarchitecture framework for implementing TNNs using standard CMOS.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal Neural Networks (TNNs) are spiking neural networks that use time as
a resource to represent and process information, similar to the mammalian
neocortex. In contrast to compute-intensive Deep Neural Networks that employ
separate training and inference phases, TNNs are capable of extremely efficient
online incremental/continuous learning and are excellent candidates for
building edge-native sensory processing units. This work proposes a
microarchitecture framework for implementing TNNs using standard CMOS.
Gate-level implementations of three key building blocks are presented: 1)
multi-synapse neurons, 2) multi-neuron columns, and 3) unsupervised and
supervised online learning algorithms based on Spike Timing Dependent
Plasticity (STDP). The TNN microarchitecture is embodied in a set of
characteristic scaling equations for assessing the gate count, area, delay and
power consumption for any TNN design. Post-synthesis results (in 45nm CMOS) for
the proposed designs are presented, and their online incremental learning
capability is demonstrated.
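To make the three building blocks concrete, below is a minimal behavioral sketch in Python of a multi-synapse temporal neuron with a ramp-no-leak response function and an unsupervised STDP-style update. It models functional behavior only, not the paper's gate-level CMOS design; the function names, time resolution, and learning-rate constants are illustrative assumptions.

```python
import numpy as np

def ramp_no_leak_neuron(in_times, weights, theta, t_max=16):
    """Return the output spike time, or np.inf if the threshold is never reached."""
    for t in range(t_max):  # discrete unit time steps
        # Once an input spike arrives at time ti, its synapse contributes a
        # ramp that grows by `w` each step (ramp-no-leak response function).
        potential = sum(w * (t - ti + 1)
                        for ti, w in zip(in_times, weights) if t >= ti)
        if potential >= theta:
            return t        # output spike at first threshold crossing
    return np.inf           # no output spike within the time window

def stdp_update(weights, in_times, out_time,
                mu_capture=0.1, mu_backoff=0.05, w_max=8.0):
    """Unsupervised STDP-style rule: strengthen synapses whose input spike
    arrives no later than the output spike (causal), weaken the rest."""
    for i, ti in enumerate(in_times):
        if ti <= out_time:
            weights[i] = min(w_max, weights[i] + mu_capture)   # capture
        else:
            weights[i] = max(0.0, weights[i] - mu_backoff)     # back off
    return weights

# Example: two early, strongly weighted inputs drive an early output spike.
times = [0, 1, np.inf, 5]            # np.inf encodes "no spike"
w = [3.0, 2.0, 1.0, 1.0]
t_out = ramp_no_leak_neuron(times, w, theta=10.0)
w = stdp_update(w, times, t_out)
```

A column would instantiate many such neurons together with winner-take-all lateral inhibition, where the earliest-spiking neuron suppresses the others.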
Related papers
- Neuromorphic analog circuits for robust on-chip always-on learning in spiking neural networks [1.9809266426888898]
Mixed-signal neuromorphic systems represent a promising solution for solving extreme-edge computing tasks.
Their spiking neural network circuits are optimized for processing sensory data online and in continuous time.
We design on-chip learning circuits with short-term analog dynamics and long-term tristate discretization mechanisms.
arXiv Detail & Related papers (2023-07-12T11:14:25Z)
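As a rough illustration of the long-term tristate discretization mentioned in this entry, the sketch below snaps a continuously drifting analog weight to one of three stable states; the thresholds are assumptions, not values from the paper.

```python
def tristate_discretize(w_analog, lo=-0.33, hi=0.33):
    """Snap a short-term analog weight trace to a long-term tristate value.
    The thresholds lo/hi are illustrative, not taken from the paper."""
    if w_analog > hi:
        return 1.0      # potentiated state
    if w_analog < lo:
        return -1.0     # depressed state
    return 0.0          # neutral state
```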
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
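For orientation, here is a minimal snnTorch simulation loop with a leaky integrate-and-fire layer. This is standard snnTorch usage and does not show the IPU-specific optimizations the paper describes.

```python
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 10)              # synaptic weights
lif = snn.Leaky(beta=0.9)            # LIF neuron; beta = membrane decay per step
mem = lif.init_leaky()               # initial membrane potential

x = torch.rand(32, 784)              # dummy input batch
spikes = []
for _ in range(25):                  # 25 simulation time steps
    spk, mem = lif(fc(x), mem)       # integrate current, emit spikes
    spikes.append(spk)
rate = torch.stack(spikes).mean(0)   # firing rate serves as the class score
```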
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
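The tandem idea can be sketched as a purely local, layer-wise objective: each spiking layer is trained to reproduce the activations of the matching layer of an ANN teacher, so no end-to-end backpropagation through the SNN is required. The loss choice below (MSE on firing rates) is an assumption for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def local_tandem_losses(snn_rates, ann_acts):
    """One loss per layer; each loss only updates that layer's weights,
    keeping learning local (teacher activations are detached)."""
    return [F.mse_loss(r, a.detach()) for r, a in zip(snn_rates, ann_acts)]
```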
- Mining the Weights Knowledge for Optimizing Neural Network Structures [1.995792341399967]
We introduce a switcher neural network (SNN) that takes as input the weights of a task-specific neural network (called TNN for short).
By mining the knowledge contained in the weights, the SNN outputs scaling factors for turning off neurons in the TNN.
In terms of accuracy, the approach consistently and significantly outperforms baseline networks and other structure-learning methods.
arXiv Detail & Related papers (2021-10-11T05:20:56Z)
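A hedged sketch of that switcher mechanism: a small network reads the task network's flattened weights and emits one sigmoid gate per neuron, and gates near zero switch neurons off. The architecture and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Switcher(nn.Module):
    """Maps task-network weights to per-neuron on/off scaling factors."""
    def __init__(self, n_weights, n_neurons):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_weights, 64), nn.ReLU(),
            nn.Linear(64, n_neurons), nn.Sigmoid())

    def forward(self, task_weights):
        return self.net(task_weights.flatten())

# Usage: h = switcher(layer.weight) * layer(x)  # near-zero factors prune neurons
```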
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
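One standard binarization strategy in this line of work is sign quantization with a straight-through gradient estimator; the sketch below shows that ingredient in isolation and is not the paper's full model.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; straight-through gradient
    in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # zero grad outside [-1, 1]

# A binary GNN layer would use BinarizeSTE.apply(self.weight) in place of
# self.weight inside its message-passing step.
```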
- A Temporal Neural Network Architecture for Online Learning [0.6091702876917281]
Temporal neural networks (TNNs) communicate and process information encoded as relative spike times.
A TNN architecture is proposed and, as a proof-of-concept, TNN operation is demonstrated within the larger context of online supervised classification.
arXiv Detail & Related papers (2020-11-27T17:15:29Z)
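Relative spike-time coding can be sketched in a few lines: stronger inputs spike earlier, and a winner-take-all column selects the neuron that spikes first. The linear encoding and t_max below are illustrative assumptions.

```python
import numpy as np

def encode(intensity, t_max=8):
    """Relative spike-time coding: stronger inputs spike earlier;
    zero intensity never spikes (np.inf)."""
    return t_max - round(intensity * t_max) if intensity > 0 else np.inf

def winner_take_all(spike_times):
    """1-WTA lateral inhibition: only the earliest spike propagates."""
    winner = int(np.argmin(spike_times))
    return winner if np.isfinite(spike_times[winner]) else None
```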
- On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks [1.7958576850695402]
We propose a local, gradient-based, error-triggered learning algorithm with online ternary weight updates.
The proposed algorithm enables online training of multi-layer SNNs with memristive neuromorphic hardware.
arXiv Detail & Related papers (2020-11-21T19:44:19Z)
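The error-triggered, ternary flavor of such a rule can be sketched as below: skip the weight write when the local error is small, otherwise apply an update quantized to three directions, which suits memristive synapses with few reliable levels. The trigger threshold, eligibility trace, and exact update form are assumptions chosen to match the summary, not the paper's derivation.

```python
import numpy as np

def error_triggered_update(w, err, pre_trace, theta=0.1, lr=0.01):
    """Skip the costly memristor write when the local error is small;
    otherwise apply a ternary update in {-1, 0, +1} scaled by lr."""
    if abs(err) < theta:
        return w                                # error below trigger: no write
    return w - lr * np.sign(err * pre_trace)    # ternary update direction
```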
- Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for Sensory Processing [4.084672048082021]
Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex.
This work focuses on implementing TNNs using off-the-shelf digital CMOS technology.
arXiv Detail & Related papers (2020-08-27T20:36:34Z)
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyperparameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
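The DARTS ingredient can be sketched as a differentiable mixture over candidate operations: each op gets an architecture weight, the layer computes a softmax-weighted sum during search, and the argmax op is kept afterwards. The candidate ops here are placeholders, not the TDNN-specific choices from the paper.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style continuous relaxation over a set of candidate ops."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture weights

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

# After search, keep only the op with the largest alpha in each MixedOp.
```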
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in latency and computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
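For context, a common first step in ANN-to-SNN conversion pipelines like this one is threshold balancing: copy the ANN weights, then set each spiking layer's firing threshold from activation statistics on calibration data so spike rates approximate ReLU outputs. The percentile heuristic below is a widely used assumption, not necessarily the paper's exact procedure, and the layer-wise fine-tuning the paper adds is not shown.

```python
import numpy as np

def balance_thresholds(layer_activations, pct=99.9):
    """One firing threshold per layer from ANN calibration activations;
    a robust percentile keeps outliers from dominating the scale."""
    return [np.percentile(acts, pct) for acts in layer_activations]
```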