Seizure Detection and Prediction by Parallel Memristive Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2206.09951v1
- Date: Mon, 20 Jun 2022 18:16:35 GMT
- Title: Seizure Detection and Prediction by Parallel Memristive Convolutional
Neural Networks
- Authors: Chenqi Li, Corey Lammie, Xuening Dong, Amirali Amirsoleimani, Mostafa
Rahimi Azghadi, Roman Genov
- Abstract summary: We propose a low-latency parallel Convolutional Neural Network (CNN) architecture that has between 2-2,800x fewer network parameters compared to SOTA CNN architectures.
Our network achieves cross validation accuracies of 99.84% for epileptic seizure detection, and 99.01% and 97.54% for epileptic seizure prediction.
The CNN component of our platform is estimated to consume approximately 2.791W of power while occupying an area of 31.255mm$^2$ in a 22nm FDSOI CMOS process.
- Score: 2.0738462952016232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: During the past two decades, epileptic seizure detection and prediction
algorithms have evolved rapidly. However, despite significant performance
improvements, their hardware implementation using conventional technologies,
such as Complementary Metal-Oxide-Semiconductor (CMOS), in power- and
area-constrained settings remains a challenging task, especially when many
recording channels are used. In this paper, we propose a novel low-latency
parallel Convolutional Neural Network (CNN) architecture that has between
2-2,800x fewer network parameters compared to SOTA CNN architectures and
achieves 5-fold cross validation accuracy of 99.84% for epileptic seizure
detection, and 99.01% and 97.54% for epileptic seizure prediction, when
evaluated using the University of Bonn Electroencephalogram (EEG), CHB-MIT and
SWEC-ETHZ seizure datasets, respectively. We subsequently implement our network
onto analog crossbar arrays comprising Resistive Random-Access Memory (RRAM)
devices, and provide a comprehensive benchmark by simulating, laying out, and
determining hardware requirements of the CNN component of our system. To the
best of our knowledge, we are the first to parallelize the execution of
convolution layer kernels on separate analog crossbars to enable 2 orders of
magnitude reduction in latency compared to SOTA hybrid Memristive-CMOS DL
accelerators. Furthermore, we investigate the effects of non-idealities on our
system and investigate Quantization Aware Training (QAT) to mitigate the
performance degradation due to low ADC/DAC resolution. Finally, we propose a
stuck weight offsetting methodology to mitigate performance degradation due to
stuck R$_{ON}$/R$_{OFF}$ memristor weights, recovering up to 32% accuracy, without
requiring retraining. The CNN component of our platform is estimated to consume
approximately 2.791W of power while occupying an area of 31.255mm$^2$ in a 22nm
FDSOI CMOS process.
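
The core idea of executing each convolution kernel on its own analog crossbar can be illustrated with a minimal simulation sketch. The snippet below is not the authors' implementation: the differential conductance mapping, the G_ON/G_OFF values, and the 1D toy signal are all assumptions, and the Python loop stands in for what the hardware would execute concurrently.

```python
# Minimal sketch (assumptions noted above): each convolution kernel is mapped
# onto its own ideal crossbar as a differential conductance pair, and every
# kernel's matrix-vector multiplication is independent of the others, which is
# what permits the parallel execution described in the abstract.
import numpy as np

G_ON, G_OFF = 1e-4, 1e-6  # assumed high/low conductance bounds (siemens)

def weights_to_conductances(w):
    """Linearly map signed weights onto a differential (G+, G-) pair."""
    w_max = np.max(np.abs(w)) + 1e-12
    g_pos = G_OFF + (np.clip(w, 0, None) / w_max) * (G_ON - G_OFF)
    g_neg = G_OFF + (np.clip(-w, 0, None) / w_max) * (G_ON - G_OFF)
    return g_pos, g_neg, w_max

def crossbar_mvm(g_pos, g_neg, v, w_max):
    """Ideal crossbar MVM: output current is G @ V, read out differentially
    and rescaled back to the weight domain."""
    i_out = g_pos @ v - g_neg @ v
    return i_out * w_max / (G_ON - G_OFF)

def parallel_conv1d(x, kernels):
    """Apply each kernel via its own crossbar. The loop is sequential in
    software, but the iterations share no state and could run concurrently."""
    k = kernels.shape[1]
    patches = np.stack([x[i:i + k] for i in range(len(x) - k + 1)], axis=1)
    outputs = []
    for w in kernels:  # one crossbar per kernel
        g_pos, g_neg, w_max = weights_to_conductances(w)
        outputs.append(crossbar_mvm(g_pos, g_neg, patches, w_max))
    return np.stack(outputs)

# Toy usage: 4 kernels of width 3 applied to a short EEG-like signal.
rng = np.random.default_rng(0)
signal = rng.standard_normal(16)
kernels = rng.standard_normal((4, 3))
print(parallel_conv1d(signal, kernels).shape)  # (4, 14)
```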
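
The abstract also mentions Quantization Aware Training to cope with low ADC/DAC resolution. One common way to realize this, shown below as a hedged sketch rather than the paper's exact recipe, is to pass activations through a uniform fake-quantizer during the forward pass so the network is trained against the coarse resolution it will face in hardware; the 4-bit default and the min-max range calibration are assumptions.

```python
# Illustrative fake-quantizer mimicking a low-resolution ADC/DAC stage; in a
# real QAT loop the rounding would be bypassed in the backward pass
# (straight-through estimator). Bit-width and range handling are assumptions.
import numpy as np

def fake_quantize(x, n_bits=4):
    """Uniformly quantize x to 2**n_bits levels over its observed range and
    return the dequantized (float) values."""
    levels = 2 ** n_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / levels if x_max > x_min else 1.0
    return np.round((x - x_min) / scale) * scale + x_min

x = np.linspace(-1.0, 1.0, 9)
print(fake_quantize(x, n_bits=2))  # the same signal seen through a 2-bit ADC
```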
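
Finally, the stuck-weight offsetting idea can be sketched as a digital correction applied at inference time with no retraining: the deviation of each known-stuck device from its target weight is offset in the output using an estimate of the average input. This is one plausible reading of the abstract, not the paper's exact methodology; the fault model, the fault rate, and the mean-input calibration are assumptions.

```python
# Hedged sketch: devices stuck at an extreme conductance contribute a known
# weight error; a per-output bias computed against a calibration mean input
# partially cancels that error at inference time, with no retraining.
import numpy as np

def inject_stuck_faults(w_target, stuck_mask, stuck_value=1.0):
    """Return a weight matrix whose masked entries are stuck at an extreme."""
    w_faulty = w_target.copy()
    w_faulty[stuck_mask] = stuck_value
    return w_faulty

def stuck_offset(w_faulty, w_target, stuck_mask, x_mean):
    """Per-output correction for the stuck devices, exact only on average."""
    w_error = np.where(stuck_mask, w_faulty - w_target, 0.0)
    return -(w_error @ x_mean)

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 16))
mask = rng.random(w.shape) < 0.1              # assume ~10% stuck devices
w_f = inject_stuck_faults(w, mask)

calib = rng.standard_normal((256, 16)) + 0.5  # assumed calibration inputs
offset = stuck_offset(w_f, w, mask, calib.mean(axis=0))

x = rng.standard_normal(16) + 0.5             # new input from the same distribution
err_raw = np.linalg.norm(w_f @ x - w @ x)
err_off = np.linalg.norm(w_f @ x + offset - w @ x)
print(err_raw, err_off)  # the offset typically shrinks the output error
```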
Related papers
- High-speed Low-consumption sEMG-based Transient-state micro-Gesture
Recognition [6.649481653007372]
The accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets, respectively.
The methods can be used for precise, high-speed, and low-power micro-gesture recognition tasks.
arXiv Detail & Related papers (2024-03-04T08:59:12Z) - Synaptic metaplasticity with multi-level memristive devices [1.5598974049838272]
We propose a memristor-based hardware solution for implementing metaplasticity during both inference and training.
We show that a two-layer perceptron achieves 97% and 86% accuracy on consecutive training of MNIST and Fashion-MNIST.
Our architecture is compatible with the memristor limited endurance and has a 15x reduction in memory.
arXiv Detail & Related papers (2023-06-21T09:40:25Z) - Hardware-aware Training Techniques for Improving Robustness of Ex-Situ
Neural Network Transfer onto Passive TiO2 ReRAM Crossbars [0.8553766625486795]
Training approaches that adapt techniques such as dropout, the reparametrization trick and regularization to TiO2 crossbar variabilities are proposed.
For the neural network trained using the proposed hardware-aware method, 79.5% of the test set's data points can be classified with an accuracy of 95% or higher.
arXiv Detail & Related papers (2023-05-29T13:55:02Z) - TinyAD: Memory-efficient anomaly detection for time series data in
Industrial IoT [43.207210990362825]
We propose a novel framework named Tiny Anomaly Detection (TinyAD) to efficiently facilitate onboard inference of CNNs for real-time anomaly detection.
To reduce the peak memory consumption of CNNs, we explore two complementary strategies: in-place and patch-by-patch memory rescheduling.
Our framework can reduce peak memory consumption by 2-5x with negligible overhead.
arXiv Detail & Related papers (2023-03-07T02:56:15Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Binary Single-dimensional Convolutional Neural Network for Seizure
Prediction [4.42106872060105]
We propose a hardware-friendly network called Binary Single-dimensional Convolutional Neural Network (BSDCNN) for epileptic seizure prediction.
BSDCNN utilizes 1D convolutional kernels to improve prediction performance.
Overall area under the curve, sensitivity, and false prediction rate reach 0.915, 89.26%, and 0.117/h on the American Epilepsy Society Seizure Prediction Challenge dataset, and 0.970, 94.69%, and 0.095/h on CHB-MIT, respectively.
arXiv Detail & Related papers (2022-06-08T09:27:37Z) - Continual Spatio-Temporal Graph Convolutional Networks [87.86552250152872]
We reformulate the Spatio-Temporal Graph Convolutional Neural Network as a Continual Inference Network.
We observe up to a 109x reduction in time complexity, on-hardware accelerations of 26x, and reductions in maximum allocated memory of 52% during online inference.
arXiv Detail & Related papers (2022-03-21T14:23:18Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z) - Adaptive Anomaly Detection for IoT Data in Hierarchical Edge Computing [71.86955275376604]
We propose an adaptive anomaly detection approach for hierarchical edge computing (HEC) systems to solve this problem.
We design an adaptive scheme to select one of the models based on the contextual information extracted from input data, to perform anomaly detection.
We evaluate our proposed approach using a real IoT dataset, and demonstrate that it reduces detection delay by 84% while maintaining almost the same accuracy as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-01-10T05:29:17Z)