ECG Classification with a Convolutional Recurrent Neural Network
- URL: http://arxiv.org/abs/2009.13320v2
- Date: Tue, 6 Oct 2020 13:21:21 GMT
- Title: ECG Classification with a Convolutional Recurrent Neural Network
- Authors: Halla Sigurthorsdottir, Jérôme Van Zaen, Ricard Delgado-Gonzalo, Mathieu Lemay
- Abstract summary: We developed a convolutional recurrent neural network to classify 12-lead ECG signals for the challenge of PhysioNet/Computing in Cardiology 2020 as team Pink Irish Hat.
The model combines convolutional and recurrent layers, takes sliding windows of ECG signals as input and yields the probability of each class as output.
Our network achieved a challenge score of 0.511 on the hidden validation set and 0.167 on the full hidden test set, ranking us 23rd out of 41 in the official ranking.
- Score: 0.13903116275861838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We developed a convolutional recurrent neural network to classify 12-lead ECG
signals for the challenge of PhysioNet/Computing in Cardiology 2020 as team
Pink Irish Hat. The model combines convolutional and recurrent layers, takes
sliding windows of ECG signals as input and yields the probability of each
class as output. The convolutional part extracts features from each sliding
window. The bi-directional gated recurrent unit (GRU) layer and an attention
layer aggregate these features from all windows into a single feature vector.
Finally, a dense layer outputs class probabilities. The final decision is made
using test time augmentation (TTA) and an optimized decision threshold. Several
hyperparameters of our architecture were optimized, the most important of which
turned out to be the choice of optimizer and the number of filters per
convolutional layer. Our network achieved a challenge score of 0.511 on the
hidden validation set and 0.167 on the full hidden test set, ranking us 23rd
out of 41 in the official ranking.
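As an illustration of the pipeline described above, the sketch below combines a per-window 1-D convolutional feature extractor, a bi-directional GRU, additive attention pooling, and a dense sigmoid output, together with a small test-time augmentation helper. Layer sizes, window shapes, the number of classes, and the names ConvRecurrentECG and predict_with_tta are assumptions made for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of the convolutional recurrent classifier for windowed
# 12-lead ECG described above; all sizes are illustrative only.
import torch
import torch.nn as nn

class ConvRecurrentECG(nn.Module):
    def __init__(self, n_leads=12, n_classes=27, n_filters=64, hidden=128):
        super().__init__()
        # Convolutional feature extractor applied to each sliding window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_leads, n_filters, kernel_size=7, padding=3),
            nn.BatchNorm1d(n_filters), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(n_filters, n_filters, kernel_size=7, padding=3),
            nn.BatchNorm1d(n_filters), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        # Bi-directional GRU aggregates the per-window features over time.
        self.gru = nn.GRU(n_filters, hidden, batch_first=True, bidirectional=True)
        # Additive attention pools the GRU outputs into a single feature vector.
        self.attn = nn.Linear(2 * hidden, 1)
        # Dense layer yields one probability per class (multi-label output).
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, n_windows, n_leads, window_samples)
        b, w, c, t = x.shape
        feats = self.cnn(x.reshape(b * w, c, t)).squeeze(-1).reshape(b, w, -1)
        seq, _ = self.gru(feats)                        # (b, w, 2 * hidden)
        weights = torch.softmax(self.attn(seq), dim=1)  # attention over windows
        pooled = (weights * seq).sum(dim=1)             # (b, 2 * hidden)
        return torch.sigmoid(self.out(pooled))          # class probabilities

def predict_with_tta(model, window_sets, threshold=0.5):
    # Test-time augmentation: average probabilities over several augmented
    # window sets, then apply a tuned decision threshold.
    probs = torch.stack([model(x) for x in window_sets]).mean(dim=0)
    return (probs > threshold).int()
```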
Related papers
- MSW-Transformer: Multi-Scale Shifted Windows Transformer Networks for
12-Lead ECG Classification [6.353064734475176]
We propose a single-layer Transformer network that uses a multi-window sliding attention mechanism at different scales to capture features in different dimensions.
A learnable feature fusion method is then proposed to integrate features from different windows to further enhance model performance.
The proposed model achieves state-of-the-art performance on five classification tasks of the PTBXL-2020 12-lead ECG dataset.
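The multi-window idea can be sketched roughly as follows: self-attention is computed within local windows at several scales, and the per-scale outputs are fused with learnable weights. This is a generic illustration under assumed dimensions and window sizes, not the MSW-Transformer's exact layer.

```python
# Minimal sketch of multi-scale windowed self-attention with learnable fusion.
import torch
import torch.nn as nn

class MultiScaleWindowAttention(nn.Module):
    def __init__(self, dim=64, heads=4, window_sizes=(8, 16, 32)):
        super().__init__()
        self.window_sizes = window_sizes
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in window_sizes
        )
        # Learnable weights that fuse features from the different window scales.
        self.fusion = nn.Parameter(torch.zeros(len(window_sizes)))

    def forward(self, x):
        # x: (batch, seq_len, dim); seq_len assumed divisible by each window size.
        b, t, d = x.shape
        outputs = []
        for w, attn in zip(self.window_sizes, self.attn):
            xw = x.reshape(b * (t // w), w, d)       # split into local windows
            yw, _ = attn(xw, xw, xw)                 # attention within each window
            outputs.append(yw.reshape(b, t, d))
        weights = torch.softmax(self.fusion, dim=0)  # learnable fusion weights
        return sum(wgt * out for wgt, out in zip(weights, outputs))
```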
arXiv Detail & Related papers (2023-06-21T08:27:26Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets.
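A minimal sketch of this fixed-classifier idea: the final layer's weights are set to a random simplex ETF and registered as a non-trainable buffer, so only the backbone is learned. Dimensions and the names simplex_etf and FixedETFClassifier are illustrative, not from the paper.

```python
# Sketch of a classifier fixed to a random simplex equiangular tight frame.
import torch
import torch.nn as nn

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    # Orthonormal columns U (feat_dim x num_classes); requires feat_dim >= num_classes.
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    center = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    m = (num_classes / (num_classes - 1)) ** 0.5 * (u @ center)
    return m.t()  # (num_classes, feat_dim): one fixed class prototype per row

class FixedETFClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim=512, num_classes=10):
        super().__init__()
        self.backbone = backbone
        # Registered as a buffer, so the classifier is never updated by training.
        self.register_buffer("weight", simplex_etf(num_classes, feat_dim))

    def forward(self, x):
        features = self.backbone(x)        # (batch, feat_dim)
        return features @ self.weight.t()  # logits against fixed ETF prototypes
```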
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Optimization of Residual Convolutional Neural Network for
Electrocardiogram Classification [0.9281671380673306]
We propose to optimize the residual one-dimensional convolutional neural network model (R-1D-CNN) at two levels.
At the first level, a residual convolutional layer and one-dimensional convolutional layers are trained to learn patient-specific ECG features.
The second level is automatic and based on a proposed algorithm using Bayesian optimization (BO).
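This automatic second level can be illustrated with a generic Bayesian-optimization-style hyperparameter search. The sketch below uses Optuna as one possible tuner and a hypothetical train_and_validate helper; it is not the authors' specific BO algorithm, and the hyperparameter names and ranges are assumptions.

```python
# Generic sketch of BO-style hyperparameter tuning for a 1-D CNN with Optuna.
import optuna

def objective(trial):
    params = {
        "n_filters": trial.suggest_int("n_filters", 16, 128),
        "kernel_size": trial.suggest_int("kernel_size", 3, 15, step=2),
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True),
        "n_blocks": trial.suggest_int("n_blocks", 2, 8),
    }
    # train_and_validate is assumed to build the residual 1-D CNN with these
    # hyperparameters and return a validation score to maximize.
    return train_and_validate(params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```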
arXiv Detail & Related papers (2021-12-11T16:52:23Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Implementing a foveal-pit inspired filter in a Spiking Convolutional
Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding.
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function.
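The filtering-and-encoding front end can be sketched roughly as below: Difference-of-Gaussian (DoG) responses at a few scales are ranked so that stronger responses correspond to earlier spikes. The kernel scales and helper names are illustrative assumptions, not the cited model's parameters.

```python
# Rough sketch of DoG filtering followed by a simple rank-order encoding.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image: np.ndarray, sigma_center: float, sigma_surround: float) -> np.ndarray:
    # Difference of Gaussians: narrow "center" minus wide "surround".
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

def rank_order_encode(image: np.ndarray, scales=((1.0, 2.0), (2.0, 4.0), (4.0, 8.0))):
    # Stack DoG responses at several scales and rank every response so that the
    # strongest activations would fire (spike) first.
    responses = np.stack([dog_response(image, c, s) for c, s in scales]).ravel()
    order = np.argsort(-responses)         # indices sorted by decreasing response
    ranks = np.empty_like(order)
    ranks[order] = np.arange(order.size)   # rank 0 = earliest (strongest) spike
    return ranks.reshape(len(scales), *image.shape)
```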
arXiv Detail & Related papers (2021-05-29T15:28:30Z) - Self Sparse Generative Adversarial Networks [73.590634413751]
Generative Adversarial Networks (GANs) are unsupervised generative models that learn a data distribution through adversarial training.
We propose a Self Sparse Generative Adversarial Network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero gradient problem.
arXiv Detail & Related papers (2021-01-26T04:49:12Z) - Multilabel 12-Lead Electrocardiogram Classification Using Gradient
Boosting Tree Ensemble [64.29529357862955]
We build an algorithm using gradient boosted tree ensembles fitted on morphology and signal processing features to classify ECG diagnoses.
For each lead, we derive features from heart rate variability, PQRST template shape, and the full signal waveform.
We join the features of all 12 leads to fit an ensemble of gradient boosting decision trees to predict probabilities of ECG instances belonging to each class.
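A rough sketch of that pipeline: per-lead features are concatenated into one vector per recording, and a boosted-tree ensemble is fitted per class. The extract_lead_features helper below is hypothetical, standing in for the HRV, PQRST-template, and waveform features described above.

```python
# Sketch of the feature-plus-gradient-boosting pipeline using scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier

def ecg_feature_vector(ecg_12lead: np.ndarray) -> np.ndarray:
    # ecg_12lead: (12, n_samples); join per-lead feature vectors into one row.
    return np.concatenate([extract_lead_features(lead) for lead in ecg_12lead])

def fit_ecg_gbt(recordings, labels):
    # recordings: list of (12, n_samples) arrays; labels: (n_recordings, n_classes)
    # binary indicator matrix. One boosted-tree ensemble is fitted per class.
    X = np.vstack([ecg_feature_vector(r) for r in recordings])
    model = MultiOutputClassifier(GradientBoostingClassifier())
    model.fit(X, labels)
    return model
```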
arXiv Detail & Related papers (2020-10-21T18:11:36Z) - Combining Scatter Transform and Deep Neural Networks for Multilabel
Electrocardiogram Signal Classification [0.6117371161379209]
We incorporate a variant of the complex wavelet transform, called a scatter transform, into a deep residual neural network (ResNet).
Our approach achieved a challenge validation score of 0.640, and full test score of 0.485, placing us 4th out of 41 in the official ranking.
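One way such a front end could look is sketched below: 1-D scattering coefficients are computed per lead (here with kymatio's Scattering1D as one possible implementation) and passed as channels to a small placeholder network standing in for a deeper 1-D ResNet. All sizes and the ScatterResNet1D name are illustrative assumptions rather than the authors' configuration.

```python
# Loose sketch of feeding wavelet scattering coefficients into a 1-D network.
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D

class ScatterResNet1D(nn.Module):
    def __init__(self, signal_length=4096, n_classes=27):
        super().__init__()
        # Wavelet scattering transform (one possible implementation).
        self.scatter = Scattering1D(J=6, shape=signal_length, Q=8)
        # Small placeholder standing in for a deeper 1-D residual network.
        self.net = nn.Sequential(
            nn.LazyConv1d(64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.LazyLinear(n_classes),
        )

    def forward(self, x):
        # x: (batch, 12, signal_length); scattering is applied lead by lead and
        # the resulting coefficients become input channels for the network.
        b, leads, t = x.shape
        s = self.scatter(x.reshape(b * leads, t))  # (b * leads, coeffs, time')
        s = s.reshape(b, -1, s.shape[-1])          # coefficients as channels
        return torch.sigmoid(self.net(s))
```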
arXiv Detail & Related papers (2020-10-15T10:13:31Z) - DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
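The latent-vector mechanism can be illustrated with a simplified sketch: a per-layer latent vector generates that layer's filters through a small hypernetwork, and an L1 penalty on the latent entries drives whole output channels toward zero, which amounts to pruning. This is a loose illustration of the idea, not the DHP method itself.

```python
# Simplified sketch of hypernetwork-generated filters gated by a latent vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv1x1(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # One latent entry per output channel; an entry driven to zero
        # effectively switches that channel off (prunes it).
        self.latent = nn.Parameter(torch.ones(out_ch))
        self.hyper = nn.Linear(1, in_ch)  # maps each latent entry to one filter

    def forward(self, x):
        # Generate a (out_ch, in_ch, 1, 1) weight tensor from the latent vector.
        weight = self.hyper(self.latent.unsqueeze(1)).unsqueeze(-1).unsqueeze(-1)
        return F.conv2d(x, weight)

def latent_sparsity(module: HyperConv1x1, strength: float = 1e-3) -> torch.Tensor:
    # L1 penalty added to the training loss; it pushes latent entries, and
    # hence whole output channels, toward zero.
    return strength * module.latent.abs().sum()
```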
arXiv Detail & Related papers (2020-03-30T17:59:18Z)