Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2206.07649v1
- Date: Tue, 14 Jun 2022 11:47:04 GMT
- Title: Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised
Convolutional Neural Networks
- Authors: Xiu Qi Chang, Ann Feng Chew, Benjamin Chen Ming Choong, Shuhui Wang,
Rui Han, Wang He, Li Xiaolin, Rajesh C. Panicker, Deepu John
- Abstract summary: A convolutional neural network model is developed for detecting atrial fibrillation from electrocardiogram signals.
The model demonstrates high performance despite being trained on limited, variable-length input data.
The final model achieved a 91.1% model compression ratio while maintaining a high accuracy of 91.7%, with less than 1% accuracy loss.
- Score: 25.160063477248904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are a promising tool in medical applications.
However, the implementation of complex DNNs on battery-powered devices is
challenging due to high energy costs for communication. In this work, a
convolutional neural network model is developed for detecting atrial
fibrillation from electrocardiogram (ECG) signals. The model demonstrates high
performance despite being trained on limited, variable-length input data.
Weight pruning and logarithmic quantisation are combined to introduce sparsity
and reduce model size, which can be exploited for reduced data movement and
lower computational complexity. The final model achieved a 91.1% model
compression ratio while maintaining a high accuracy of 91.7%, with less than
1% accuracy loss.
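The abstract includes no implementation, but as a rough sketch of how magnitude pruning and logarithmic (power-of-two) quantisation compose, consider the NumPy fragment below. The function names, sparsity level, and 4-bit exponent range are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (magnitude pruning)."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

def log_quantise(w, n_bits=4):
    """Round each surviving weight to the nearest signed power of two.

    The exponent is clipped to a small signed range representable in n_bits;
    zeroed (pruned) weights stay zero.
    """
    sign = np.sign(w)
    mag = np.abs(w)
    out = np.zeros_like(w)
    nz = mag > 0
    exp = np.round(np.log2(mag[nz]))
    exp = np.clip(exp, -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    out[nz] = sign[nz] * 2.0 ** exp
    return out

w = np.random.randn(1024).astype(np.float32) * 0.1
pruned, mask = prune_weights(w, sparsity=0.6)
quantised = log_quantise(pruned, n_bits=4)
print(f"sparsity: {1 - mask.mean():.2f}, "
      f"unique magnitudes: {len(np.unique(np.abs(quantised)))}")
```

Restricting surviving weights to signed powers of two lets multiplications be replaced by bit shifts in fixed-point hardware, which is the usual source of the reduced data movement and computational complexity the abstract refers to.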
Related papers
- MC-QDSNN: Quantized Deep evolutionary SNN with Multi-Dendritic Compartment Neurons for Stress Detection using Physiological Signals [1.474723404975345]
This work proposes the Multi-Compartment Leaky (MCLeaky) neuron as a viable alternative for efficient processing of time-series data.
The proposed MCLeaky-neuron-based spiking neural network model and its quantized variant were benchmarked against state-of-the-art (SOTA) spiking LSTMs.
Results show that networks with the MCLeaky neuron achieved a superior accuracy of 98.8% in detecting stress.
arXiv Detail & Related papers (2024-10-07T12:48:03Z)
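The MC-QDSNN summary above gives no equations; the sketch below shows a generic two-stage leaky update with several dendritic compartments, each with its own leak rate, feeding a thresholded soma. The dynamics, constants, and reset rule are hypothetical stand-ins for the paper's MCLeaky formulation.

```python
import numpy as np

def mc_leaky_step(v_dendrites, v_soma, x, decays, w, threshold=1.0):
    """One step of a toy multi-compartment leaky neuron.

    Each dendritic compartment leaks with its own rate and integrates the
    input; the soma sums the compartments and spikes on crossing a threshold
    (hypothetical dynamics, not the paper's exact MCLeaky rule).
    """
    v_dendrites = decays * v_dendrites + w * x   # per-compartment leaky integration
    v_soma = 0.9 * v_soma + v_dendrites.sum()    # soma integrates dendritic currents
    spike = v_soma >= threshold
    if spike:
        v_soma = 0.0                             # hard reset after a spike
    return v_dendrites, v_soma, spike

rng = np.random.default_rng(0)
v_d, v_s = np.zeros(3), 0.0
decays = np.array([0.9, 0.7, 0.5])               # distinct leak rates per dendrite
w = np.array([0.3, 0.2, 0.1])
spikes = []
for t in range(100):
    v_d, v_s, s = mc_leaky_step(v_d, v_s, rng.standard_normal(), decays, w)
    spikes.append(s)
print("spike count:", sum(spikes))
```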
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Few-Shot Transfer Learning for Individualized Braking Intent Detection on Neuromorphic Hardware [0.21847754147782888]
This work explores the use of a few-shot transfer learning method to train and implement a convolutional spiking neural network (CSNN) on BrainChip neuromorphic hardware.
Results show the energy efficiency of the neuromorphic hardware through a power reduction of over 97%, with only a 1.3× increase in latency.
arXiv Detail & Related papers (2024-07-21T15:35:46Z)
- Evaluating Spiking Neural Network On Neuromorphic Platform For Human Activity Recognition [2.710807780228189]
Energy efficiency and low latency are crucial requirements for wearable AI-empowered human activity recognition systems.
The spike-based workout recognition system achieves accuracy comparable to a traditional neural network running on GAP8, a popular milliwatt-scale RISC-V-based multi-core processor.
arXiv Detail & Related papers (2023-08-01T18:59:06Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
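The "small windows, arbitrarily large signals" claim in the spatial-CNN entry above rests on the network being fully convolutional: with no fixed-size flatten or dense layer, the same weights slide over any input length. A minimal PyTorch sketch (layer sizes are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

# A fully convolutional 1-D network: no flatten/linear layer tied to a fixed
# input length, so the same weights run on windows of any size.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),   # per-position prediction head
)

short = torch.randn(1, 1, 128)         # e.g. a training window
long = torch.randn(1, 1, 100_000)      # a much longer signal at evaluation time
print(model(short).shape)              # torch.Size([1, 1, 128])
print(model(long).shape)               # torch.Size([1, 1, 100000])
```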
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregularly spaced observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
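To see how a CTRNN-style model accounts for irregular observations (previous entry), the toy cell below decays its hidden state exponentially over the variable gap between timestamps before absorbing each new observation. This is a generic decay-RNN sketch assuming a simple exponential leak, not the paper's exact formulation.

```python
import numpy as np

def ctrnn_decay_step(h, x, dt, decay_rate, W_in, W_rec):
    """Toy continuous-time RNN cell: the hidden state leaks over the
    (irregular) gap dt, then the new observation is absorbed discretely."""
    h = h * np.exp(-decay_rate * dt)     # continuous-time leak over the gap
    h = np.tanh(W_in @ x + W_rec @ h)    # discrete update at observation time
    return h

rng = np.random.default_rng(0)
H, D = 8, 2
W_in = rng.normal(size=(H, D)) * 0.3
W_rec = rng.normal(size=(H, H)) * 0.3
h, t_prev = np.zeros(H), 0.0
for t_obs in np.cumsum(rng.exponential(1.0, size=20)):  # irregular timestamps
    x = rng.normal(size=D)
    h = ctrnn_decay_step(h, x, t_obs - t_prev, 0.5, W_in, W_rec)
    t_prev = t_obs
print("final hidden state norm:", np.linalg.norm(h))
```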
- Robust Peak Detection for Holter ECGs by Self-Organized Operational Neural Networks [12.773050144952593]
Deep convolutional neural networks (CNNs) have achieved state-of-the-art performance in peak detection for Holter monitors.
In this study, we propose 1-D Self-Organized Operational Neural Networks (Self-ONNs) with generative neurons.
Results demonstrate that the proposed solution achieves a 99.10% F1-score, 99.79% sensitivity, and 98.42% positive predictivity in the CPSC dataset.
arXiv Detail & Related papers (2021-09-30T19:45:06Z)
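Self-ONN generative neurons (previous entry) are often formulated as a learnable Maclaurin-style expansion of the nodal operator: the layer convolves element-wise powers of the input with a separate kernel bank per order. The sketch below assumes that common formulation; the paper's layer may differ in details such as input normalisation.

```python
import torch
import torch.nn as nn

class SelfONN1d(nn.Module):
    """Sketch of a 1-D generative-neuron layer: y = sum_q W_q * x^q,
    i.e. element-wise powers of the input, each convolved with its own
    kernel bank (one common Self-ONN formulation, not necessarily the
    paper's exact layer)."""
    def __init__(self, in_ch, out_ch, kernel_size, q=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size,
                      padding=kernel_size // 2, bias=(i == 0))
            for i in range(q)
        )

    def forward(self, x):
        return sum(conv(x ** (i + 1)) for i, conv in enumerate(self.convs))

layer = SelfONN1d(1, 8, kernel_size=9, q=3)
ecg = torch.randn(1, 1, 360)   # one second of ECG at 360 Hz (illustrative)
print(layer(ecg).shape)        # torch.Size([1, 8, 360])
```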
- Multistage Pruning of CNN Based ECG Classifiers for Edge Devices [9.223908421919733]
Convolutional neural network (CNN)-based deep learning has been used successfully to detect anomalous beats in ECG signals.
The computational complexity of existing CNN models prohibits them from being implemented in low-powered edge devices.
This paper presents a novel multistage pruning technique that reduces CNN model complexity with negligible loss in performance.
arXiv Detail & Related papers (2021-08-31T17:51:15Z)
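A hedged illustration of the multistage idea from the entry above: prune to an increasing sparsity target in stages, with an optional fine-tuning hook after each stage. The schedule and pruning criterion here are generic placeholders, not the paper's.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= thresh, w, 0.0)

def multistage_prune(w, stages=(0.3, 0.5, 0.7), finetune=None):
    """Prune in several stages with increasing sparsity, fine-tuning
    after each stage (generic schedule; the paper's stages and
    criteria may differ)."""
    for s in stages:
        w = magnitude_prune(w, s)
        if finetune is not None:
            w = finetune(w)   # e.g. a few epochs of retraining, mask held fixed
    return w

w = np.random.randn(10_000) * 0.1
w_pruned = multistage_prune(w)
print(f"final sparsity: {np.mean(w_pruned == 0):.2f}")
```

Pruning gradually rather than in one shot gives the network a chance to recover accuracy between stages, which is why multistage schedules typically lose less performance at the same final sparsity.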
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
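The pairing of a logarithmic number system with a multiplicative weight update (previous entry) is natural because a multiplicative step on a weight is an additive step on its log-magnitude. The toy rule below captures only that structural point; it is a simplified stand-in, not the actual LNS-Madam optimizer.

```python
import numpy as np

def madam_like_update(w, grad, lr=0.01):
    """Toy multiplicative weight update: each weight's magnitude is scaled
    by exp(±lr), i.e. its log-magnitude moves by a fixed additive step,
    which maps directly onto a logarithmic number system."""
    direction = np.sign(w) * np.sign(grad)   # does the gradient push |w| up or down?
    return w * np.exp(-lr * direction)       # additive step in log-magnitude space

w = np.array([0.5, -0.25, 0.125])
grad = np.array([0.1, 0.1, -0.2])
for _ in range(3):
    w = madam_like_update(w, grad)
print(w)  # magnitudes drift multiplicatively; signs are preserved
```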
- Neural networks with late-phase weights [66.72777753269658]
We show that the solutions found by SGD can be further improved by ensembling a subset of the weights in the late stages of learning.
At the end of learning, we recover a single model by taking a spatial average in weight space.
arXiv Detail & Related papers (2020-07-25T13:23:37Z)
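The final averaging step in the late-phase-weights entry reduces, at its simplest, to a parameter-wise mean over the ensembled weight copies. A minimal sketch with hypothetical parameters (the paper additionally controls which subset of weights is ensembled and how the copies are trained):

```python
import numpy as np

def average_weights(members):
    """Collapse an ensemble of late-phase weight copies into one model by
    averaging parameter-wise in weight space."""
    return {name: np.mean([m[name] for m in members], axis=0)
            for name in members[0]}

# Three hypothetical late-phase copies of the same two-parameter model.
members = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.1])},
    {"w": np.array([1.2, 1.8]), "b": np.array([0.3])},
    {"w": np.array([0.8, 2.2]), "b": np.array([0.2])},
]
print(average_weights(members))  # {'w': array([1., 2.]), 'b': array([0.2])}
```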
This list is automatically generated from the titles and abstracts of the papers on this site.