Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware
- URL: http://arxiv.org/abs/2006.06777v1
- Date: Thu, 11 Jun 2020 19:56:55 GMT
- Title: Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware
- Authors: Adarsha Balaji and Thibaut Marty and Anup Das and Francky Catthoor
- Abstract summary: We propose a design methodology to partition and map the neurons and synapses of online learning SNN-based applications to neuromorphic architectures at run-time.
Our algorithm reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
- Score: 0.44446524844395807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a design methodology to partition and map the
neurons and synapses of online learning SNN-based applications to neuromorphic
architectures at run-time. Our design methodology operates in two steps --
step 1 is a layer-wise greedy approach to partition SNNs into clusters of
neurons and synapses incorporating the constraints of the neuromorphic
architecture, and step 2 is a hill-climbing optimization algorithm that
minimizes the total spikes communicated between clusters, improving energy
consumption on the shared interconnect of the architecture. We conduct
experiments to evaluate the feasibility of our algorithm using synthetic and
realistic SNN-based applications. We demonstrate that our algorithm reduces SNN
mapping time by an average of 780x compared to a state-of-the-art design-time
SNN partitioning approach, with only 6.25% lower solution quality.
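The two-step methodology described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the cluster capacity constraint (neuron count only, ignoring synapse and fan-in limits), the spike-count cost function, and the random-swap hill-climbing move are all simplifying assumptions.

```python
import random

def greedy_partition(layers, max_neurons_per_cluster):
    """Step 1: layer-wise greedy partitioning. Walk the SNN layer by
    layer, packing neurons into clusters until a hardware capacity
    constraint (here, a simple neuron budget) is reached."""
    clusters, current = [], []
    for layer in layers:
        for neuron in layer:
            if len(current) == max_neurons_per_cluster:
                clusters.append(current)
                current = []
            current.append(neuron)
    if current:
        clusters.append(current)
    return clusters

def inter_cluster_spikes(clusters, spike_counts):
    """Cost: total spikes crossing cluster boundaries. spike_counts maps
    (src_neuron, dst_neuron) -> spikes carried on that synapse."""
    where = {n: i for i, c in enumerate(clusters) for n in c}
    return sum(s for (u, v), s in spike_counts.items() if where[u] != where[v])

def hill_climb(clusters, spike_counts, iterations=1000, seed=0):
    """Step 2: hill-climbing optimization. Swap a random neuron pair
    across two clusters and keep the swap only if it lowers the
    inter-cluster spike count (a proxy for interconnect energy)."""
    rng = random.Random(seed)
    best = inter_cluster_spikes(clusters, spike_counts)
    for _ in range(iterations):
        a, b = rng.sample(range(len(clusters)), 2)
        i = rng.randrange(len(clusters[a]))
        j = rng.randrange(len(clusters[b]))
        clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
        cost = inter_cluster_spikes(clusters, spike_counts)
        if cost < best:
            best = cost
        else:  # revert a non-improving swap
            clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
    return clusters, best
```

Because both steps touch each neuron (or candidate swap) only a constant number of times, this kind of greedy-plus-local-search scheme is cheap enough to run at run-time, which is where the reported speedup over design-time partitioners comes from.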
Related papers
- Spatial-Temporal Search for Spiking Neural Networks [32.937536365872745]
Spiking Neural Networks (SNNs) are considered as a potential candidate for the next generation of artificial intelligence.
We propose a differentiable approach to optimize SNN on both spatial and temporal dimensions.
Our methods achieve comparable classification performance on CIFAR10/100 and ImageNet, with accuracies of 96.43%, 78.96%, and 70.21%, respectively.
arXiv Detail & Related papers (2024-10-24T09:32:51Z)
- Sign Gradient Descent-based Neuronal Dynamics: ANN-to-SNN Conversion Beyond ReLU Network [10.760652747217668]
Spiking neural network (SNN) is studied in multidisciplinary domains to simulate neuro-scientific mechanisms.
The lack of discrete theory obstructs the practical application of SNN by limiting its performance and nonlinearity support.
We present a new optimization-theoretic perspective of the discrete dynamics of spiking neurons.
arXiv Detail & Related papers (2024-07-01T02:09:20Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- CR-LSO: Convex Neural Architecture Optimization in the Latent Space of Graph Variational Autoencoder with Input Convex Neural Networks [7.910915721525413]
In neural architecture search (NAS) methods based on latent space optimization (LSO), a deep generative model is trained to embed discrete neural architectures into a continuous latent space.
This paper develops a convexity regularized latent space optimization (CR-LSO) method, which regularizes the learning of the latent space in order to obtain a convex architecture-performance mapping.
Experimental results on three popular NAS benchmarks show that CR-LSO achieves competitive evaluation results in terms of both computational complexity and performance.
arXiv Detail & Related papers (2022-11-11T01:55:11Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Neural Architecture Search for Spiking Neural Networks [10.303676184878896]
Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs).
Most prior SNN methods use ANN-like architectures, which could provide sub-optimal performance for temporal sequence processing of binary information in SNNs.
We introduce a novel Neural Architecture Search (NAS) approach for finding better SNN architectures.
arXiv Detail & Related papers (2022-01-23T16:34:27Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on the scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
- Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experimental results show a mean azimuth error of 13 degrees, which surpasses the accuracy of the other biologically plausible neuromorphic approach for sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that solves the verification problem in an iterative manner and explores two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.