ANN vs SNN: A case study for Neural Decoding in Implantable
Brain-Machine Interfaces
- URL: http://arxiv.org/abs/2312.15889v1
- Date: Tue, 26 Dec 2023 05:40:39 GMT
- Title: ANN vs SNN: A case study for Neural Decoding in Implantable
Brain-Machine Interfaces
- Authors: Biyan Zhou, Pao-Sheng Vincent Sun, and Arindam Basu
- Abstract summary: In this work, we compare different neural networks (NN) for motor decoding in terms of accuracy and implementation cost.
We further show that combining traditional signal processing techniques with machine learning ones delivers surprisingly good performance even with simple NNs.
- Score: 0.7904805552920349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While it is important to make implantable brain-machine interfaces (iBMI)
wireless to increase patient comfort and safety, the trend of increased channel
count in recent neural probes poses a challenge due to the concomitant increase
in the data rate. Extracting information from raw data at the source by using
edge computing is a promising solution to this problem, with integrated
intention decoders providing the best compression ratio. In this work, we
compare different neural networks (NN) for motor decoding in terms of accuracy
and implementation cost. We further show that combining traditional signal
processing techniques with machine learning ones delivers surprisingly good
performance even with simple NNs. Adding a block Bidirectional Bessel filter
provided maximum gains of $\approx 0.05$, $0.04$ and $0.03$ in $R^2$ for
ANN\_3D, SNN\_3D and ANN models, while the gains were lower ($\approx 0.02$ or
less) for LSTM and SNN\_streaming models. Increasing training data helped
improve the $R^2$ of all models by $0.03-0.04$ indicating they have more
capacity for future improvement. In general, LSTM and SNN\_streaming models
occupy the high and low ends of the Pareto curves (for accuracy vs.
memory/operations) respectively, while SNN\_3D and ANN\_3D occupy intermediate
positions. Our work presents state-of-the-art results for this dataset and
paves the way for decoder-integrated-implants of the future.
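The abstract reports $R^2$ gains from adding a bidirectional Bessel filter on top of the neural decoders. The exact block-processing scheme, filter order, and cutoff used in the paper are not given in this summary; the sketch below only illustrates the general idea of zero-phase (forward-backward) Bessel smoothing of decoded kinematics followed by $R^2$ evaluation, with an illustrative sampling rate and cutoff.
```python
# Minimal sketch (not the paper's exact pipeline): smooth a decoder's
# velocity predictions with a zero-phase (bidirectional) Bessel filter
# and measure the R^2 change. Filter order, cutoff frequency, and
# sampling rate below are illustrative assumptions.
import numpy as np
from scipy.signal import bessel, filtfilt

def r2_score(y_true, y_pred):
    """Coefficient of determination, computed per channel and averaged."""
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

def smooth_predictions(y_pred, fs=250.0, cutoff_hz=5.0, order=4):
    """Forward-backward (zero-phase) low-pass Bessel filtering of decoder output."""
    b, a = bessel(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, y_pred, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / 250.0)
    # Synthetic ground-truth 2D velocity and a noisy "decoded" estimate.
    y_true = np.stack([np.sin(2 * np.pi * 0.5 * t),
                       np.cos(2 * np.pi * 0.3 * t)], axis=1)
    y_pred = y_true + 0.3 * rng.standard_normal(y_true.shape)
    print("R^2 raw     :", round(r2_score(y_true, y_pred), 3))
    print("R^2 filtered:", round(r2_score(y_true, smooth_predictions(y_pred)), 3))
```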
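The abstract also places the models on Pareto curves of accuracy versus memory/operations. A Pareto front simply keeps the models that no other model beats on both axes at once; the sketch below shows that selection rule with placeholder model names and numbers, not results from the paper.
```python
# Minimal sketch of a "Pareto front for accuracy vs. memory/operations":
# keep only models that are not dominated, i.e. no other model is both
# more accurate and cheaper. All numbers below are placeholders.
from typing import List, Tuple

Model = Tuple[str, float, float]  # (name, r2_accuracy, cost_memory_or_ops)

def pareto_front(models: List[Model]) -> List[Model]:
    """Return models for which no other model has higher accuracy AND lower cost."""
    front = []
    for name, acc, cost in models:
        dominated = any(
            (acc2 >= acc and cost2 <= cost) and (acc2 > acc or cost2 < cost)
            for _, acc2, cost2 in models
        )
        if not dominated:
            front.append((name, acc, cost))
    return sorted(front, key=lambda m: m[2])

if __name__ == "__main__":
    candidates = [                      # placeholder accuracy/cost values
        ("LSTM", 0.74, 100.0),
        ("SNN_3D", 0.72, 40.0),
        ("ANN_3D", 0.71, 35.0),
        ("ANN", 0.68, 20.0),
        ("SNN_streaming", 0.66, 5.0),
    ]
    for name, acc, cost in pareto_front(candidates):
        print(f"{name}: R^2={acc}, cost={cost}")
```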
Related papers
- Obtaining Optimal Spiking Neural Network in Sequence Learning via CRNN-SNN Conversion [12.893883491781697]
Spiking neural networks (SNNs) are a promising alternative to conventional artificial neural networks (ANNs).
We design two sub-pipelines to support the end-to-end conversion of different structures in neural networks.
We show the effectiveness of our method over short and long timescales compared with the state-of-the-art learning- and conversion-based methods.
arXiv Detail & Related papers (2024-08-18T08:23:51Z) - One-Spike SNN: Single-Spike Phase Coding with Base Manipulation for ANN-to-SNN Conversion Loss Minimization [0.41436032949434404]
As spiking neural networks (SNNs) are event-driven, their energy efficiency is higher than that of conventional artificial neural networks (ANNs).
In this work, we propose a single-spike phase coding as an encoding scheme that minimizes the number of spikes to transfer data between SNN layers.
Without any additional retraining or architectural constraints on ANNs, the proposed conversion method loses little inference accuracy (0.58% on average), as verified on three convolutional neural networks (CNNs) with the CIFAR and ImageNet datasets.
arXiv Detail & Related papers (2024-01-30T02:00:28Z) - Memory-Efficient Reversible Spiking Neural Networks [8.05761813203348]
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs).
However, SNNs require much more memory than ANNs, which impedes the training of deeper SNN models.
We propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training.
arXiv Detail & Related papers (2023-12-13T06:39:49Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to achieve the exact same performance as ANNs.
Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - Low Latency Conversion of Artificial Neural Network Models to
Rate-encoded Spiking Neural Networks [11.300257721586432]
Spiking neural networks (SNNs) are well suited for resource-constrained applications.
In a typical rate-encoded SNN, a series of binary spikes within a globally fixed time window is used to fire the neurons (a generic sketch of rate coding appears after this list).
The aim of this paper is to reduce this time window, and hence the inference latency, while maintaining accuracy when converting ANNs to their equivalent SNNs.
arXiv Detail & Related papers (2022-10-27T08:13:20Z) - SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking
Neural Networks [117.56823277328803]
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train the SNN in a fast and memory-efficient way.
Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets.
arXiv Detail & Related papers (2022-06-19T16:52:56Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate binary neural networks (BNNs).
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Fully Spiking Variational Autoencoder [66.58310094608002]
Spiking neural networks (SNNs) can be run on neuromorphic devices with ultra-high speed and ultra-low energy consumption.
In this study, we build a variational autoencoder (VAE) with SNN to enable image generation.
arXiv Detail & Related papers (2021-09-26T06:10:14Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Deep Time Delay Neural Network for Speech Enhancement with Full Data
Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
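Several of the related papers above rely on rate coding, where an analog value is transmitted as binary spikes within a globally fixed time window. The sketch below is a generic illustration of that encoding idea, not the specific scheme of any listed paper; the window length and input normalization are assumptions.
```python
# Generic illustration of rate coding in an SNN (not any listed paper's exact
# scheme): each analog input value in [0, 1] is converted into a train of
# binary spikes inside a fixed time window, and a downstream layer sees the
# spike count (rate) rather than the analog value.
import numpy as np

def rate_encode(x, t_window=32, rng=None):
    """Bernoulli (Poisson-like) rate coding: spike probability per step = x."""
    rng = rng or np.random.default_rng(0)
    x = np.clip(x, 0.0, 1.0)
    # shape: (t_window, *x.shape); each entry is a 0/1 spike
    return (rng.random((t_window,) + x.shape) < x).astype(np.uint8)

if __name__ == "__main__":
    x = np.array([0.1, 0.5, 0.9])     # normalized analog inputs
    spikes = rate_encode(x, t_window=32)
    print("spike counts   :", spikes.sum(axis=0))
    print("estimated rates:", spikes.mean(axis=0))  # approximately equal to x
```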