A New Neuromorphic Computing Approach for Epileptic Seizure Prediction
- URL: http://arxiv.org/abs/2102.12773v1
- Date: Thu, 25 Feb 2021 10:39:18 GMT
- Title: A New Neuromorphic Computing Approach for Epileptic Seizure Prediction
- Authors: Fengshi Tian, Jie Yang, Shiqi Zhao, Mohamad Sawan
- Abstract summary: CNNs are computationally expensive and power hungry.
Motivated by the energy-efficient spiking neural networks (SNNs), a neuromorphic computing approach for seizure prediction is proposed in this work.
- Score: 4.798958633851825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several seizure prediction methods with high specificity and
sensitivity based on convolutional neural networks (CNNs) have been reported.
However, CNNs are computationally expensive and power hungry, which makes
CNN-based methods hard to implement on wearable devices. Motivated by
energy-efficient spiking neural networks (SNNs), a neuromorphic computing
approach for seizure prediction is proposed in this work. This approach uses a
designed Gaussian random discrete encoder to generate spike sequences from the
EEG samples and makes predictions with a spiking convolutional neural network
(Spiking-CNN) that combines the advantages of CNNs and SNNs. The experimental
results show that the sensitivity, specificity and AUC remain at 95.1%, 99.2%
and 0.912 respectively, while the computational complexity is reduced by 98.58%
compared to the CNN, indicating that the proposed Spiking-CNN is hardware
friendly and highly accurate.
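The abstract names a "Gaussian random discrete encoder" but gives no implementation details. The following is a minimal sketch of one plausible reading, in which each standardized EEG sample emits a spike whenever it exceeds a freshly drawn Gaussian random threshold; the function name, parameters, and thresholding rule are illustrative assumptions, not the paper's actual encoder.

```python
import numpy as np

def gaussian_random_encode(eeg, n_steps=8, seed=0):
    """Sketch of a Gaussian random discrete encoder (assumed form).

    At each of n_steps time steps, a sample spikes when its value
    exceeds an independent Gaussian random threshold, so larger
    amplitudes fire more often on average.
    """
    rng = np.random.default_rng(seed)
    eeg = np.asarray(eeg, dtype=float)
    # One independent N(0, 1) threshold per sample per time step.
    thresholds = rng.normal(loc=0.0, scale=1.0, size=(n_steps,) + eeg.shape)
    return (eeg > thresholds).astype(np.uint8)

window = np.array([-1.5, 0.0, 1.5])  # toy standardized EEG samples
spikes = gaussian_random_encode(window)
print(spikes.shape)  # (8, 3): one binary spike train per input sample
```

The resulting binary tensor can then be fed to the spiking convolutional layers over the time dimension, which is what makes the downstream computation accumulate-only rather than multiply-accumulate.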
Related papers
- RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding [17.342181435229573]
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature.
Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets.
This work theoretically demonstrates that SNN's inherent adversarial robustness stems from its Poisson coding.
arXiv Detail & Related papers (2024-07-29T15:26:15Z)
- Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators [47.371024581669516]
Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks.
Applying SNNs to sequential tasks, such as text classification and time-series forecasting, has been hindered by the challenge of creating an effective and hardware-friendly spike-form positional encoding strategy.
We propose a novel PE technique for SNNs, termed CPG-PE. We demonstrate that the commonly used sinusoidal PE is mathematically a specific solution to the membrane potential dynamics of a particular CPG.
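For reference against the claim above, the commonly used sinusoidal positional encoding (which the entry says is a specific solution of the CPG membrane-potential dynamics) can be written compactly; this is the standard Transformer form, not the spike-form CPG-PE proposed in the paper.

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    """Standard sinusoidal positional encoding:
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = sinusoidal_pe(16, 8)
print(pe.shape)  # (16, 8)
```

Each dimension oscillates at a distinct frequency, which is the oscillatory behavior CPGs also exhibit.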
arXiv Detail & Related papers (2024-05-23T09:39:12Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network [19.932683405796126]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks.
SNNs suffer from non-spike computations caused by the structure of their residual connection.
We develop Spikingformer, a pure transformer-based spiking neural network.
arXiv Detail & Related papers (2023-04-24T09:44:24Z)
- PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously and as successfully as backprop.
This method achieves a favorable performance compared to the state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z)
- Continuous approximation by convolutional neural networks with a sigmoidal function [0.0]
We present a class of convolutional neural networks (CNNs) called non-overlapping CNNs.
We prove that such networks with a sigmoidal activation function are capable of approximating an arbitrary continuous function defined on a compact input set to any desired degree of accuracy.
arXiv Detail & Related papers (2022-09-27T12:31:36Z)
- Convolutional Spiking Neural Networks for Detecting Anticipatory Brain Potentials Using Electroencephalogram [0.21847754147782888]
Spiking neural networks (SNNs) are receiving increased attention because they mimic synaptic connections in biological systems and produce spike trains.
Recently, the addition of convolutional layers to combine the feature extraction power of convolutional networks with the computational efficiency of SNNs has been introduced.
This paper studies the feasibility of using a convolutional spiking neural network (CSNN) to detect anticipatory slow cortical potentials.
arXiv Detail & Related papers (2022-08-14T19:04:15Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.