Sub-mW Neuromorphic SNN audio processing applications with Rockpool and
Xylo
- URL: http://arxiv.org/abs/2208.12991v2
- Date: Tue, 30 Aug 2022 13:46:30 GMT
- Title: Sub-mW Neuromorphic SNN audio processing applications with Rockpool and
Xylo
- Authors: Hannah Bos and Dylan Muir
- Abstract summary: Spiking Neural Networks (SNNs) provide an efficient computational mechanism for temporal signal processing.
SNNs have been historically difficult to configure, lacking a general method for finding solutions for arbitrary tasks.
Here we demonstrate a convenient high-level pipeline to design, train and deploy arbitrary temporal signal processing applications to sub-mW SNN inference hardware.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Networks (SNNs) provide an efficient computational mechanism
for temporal signal processing, especially when coupled with low-power SNN
inference ASICs. SNNs have been historically difficult to configure, lacking a
general method for finding solutions for arbitrary tasks. In recent years,
gradient-descent optimization methods have been applied to SNNs with increasing
ease. SNNs and SNN inference processors therefore offer a good platform for
commercial low-power signal processing in energy constrained environments
without cloud dependencies. However, to date these methods have not been
accessible to ML engineers in industry, requiring graduate-level training to
successfully configure a single SNN application. Here we demonstrate a
convenient high-level pipeline to design, train and deploy arbitrary temporal
signal processing applications to sub-mW SNN inference hardware. We apply a new
straightforward SNN architecture designed for temporal signal processing, using
a pyramid of synaptic time constants to extract signal features at a range of
temporal scales. We demonstrate this architecture on an ambient audio
classification task, deployed to the Xylo SNN inference processor in streaming
mode. Our application achieves high accuracy (98%) and low latency (100ms) at
low power (< 4 μW inference power). Our approach makes training and deploying
SNN applications available to ML engineers with general NN backgrounds, without
requiring specific prior experience with spiking NNs. We intend for our
approach to make neuromorphic hardware and SNNs an attractive choice for
commercial low-power and edge signal processing applications.
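The core architectural idea in the abstract — a pyramid of synaptic time constants that extracts signal features at a geometric range of temporal scales — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' Rockpool implementation: each pyramid level applies a leaky exponential synapse whose time constant doubles at the next level, so slow levels retain long-range temporal context while fast levels track transients. The function name, base time constant, and level count below are all hypothetical choices for illustration.

```python
import numpy as np

def synaptic_time_constant_pyramid(spikes, tau_base=2.0, n_levels=4, dt=1.0):
    """Filter an input spike train with a pyramid of exponential synapses.

    Level l uses tau = tau_base * 2**l, so the filter bank extracts
    features over a geometric range of temporal scales.

    spikes: array of shape (T,) with spike counts per time step.
    Returns: array of shape (T, n_levels) of filtered synaptic traces.
    """
    taus = tau_base * 2.0 ** np.arange(n_levels)   # (n_levels,) time constants
    alphas = np.exp(-dt / taus)                    # per-step decay factors
    T = len(spikes)
    traces = np.zeros((T, n_levels))
    state = np.zeros(n_levels)
    for t in range(T):
        state = alphas * state + spikes[t]         # leaky integration of input
        traces[t] = state
    return traces

# A single input spike produces exponentially decaying traces;
# higher pyramid levels (larger tau) decay more slowly.
x = np.zeros(50)
x[0] = 1.0
out = synaptic_time_constant_pyramid(x)
```

In a full SNN these traces would feed spiking neurons whose outputs are trained with surrogate-gradient descent; here only the multi-timescale filtering stage is sketched.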
Related papers
- Accurate Mapping of RNNs on Neuromorphic Hardware with Adaptive Spiking Neurons [2.9410174624086025]
We present a $\Sigma\Delta$-low-pass RNN (lpRNN) for mapping rate-based RNNs to spiking neural networks (SNNs).
An adaptive spiking neuron model encodes signals using $\Sigma\Delta$-modulation and enables precise mapping.
We demonstrate the implementation of the lpRNN on Intel's neuromorphic research chip Loihi.
arXiv Detail & Related papers (2024-07-18T14:06:07Z) - Highly Efficient SNNs for High-speed Object Detection [7.3074002563489024]
Experimental results show that our efficient SNN can achieve 118X speedup on GPU with only 1.5MB parameters for object detection tasks.
We further verify our SNN on an FPGA platform, where the proposed model achieves 800+ FPS object detection with extremely low latency.
arXiv Detail & Related papers (2023-09-27T10:31:12Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE)
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL)
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Spiking Approximations of the MaxPooling Operation in Deep SNNs [0.0]
Spiking Neural Networks (SNNs) are an emerging domain of biologically inspired neural networks.
We present two hardware-friendly methods to implement Max-Pooling in deep SNNs.
In a first, we also execute SNNs with spiking-MaxPooling layers on Intel's Loihi neuromorphic hardware.
arXiv Detail & Related papers (2022-05-14T14:47:10Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking
Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN)
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potentials of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Optimal Conversion of Conventional Artificial Neural Networks to Spiking
Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs)
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms.
Our method is promising to get implanted onto embedded platforms with better support of SNNs with limited energy and memory.
arXiv Detail & Related papers (2021-02-28T12:04:22Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.