Optimization of Residual Convolutional Neural Network for
Electrocardiogram Classification
- URL: http://arxiv.org/abs/2112.06024v1
- Date: Sat, 11 Dec 2021 16:52:23 GMT
- Title: Optimization of Residual Convolutional Neural Network for
Electrocardiogram Classification
- Authors: Zeineb Fki, Boudour Ammar and Mounir Ben Ayed
- Abstract summary: We propose to optimize the Residual one-Dimensional Convolutional Neural Network model (R-1D-CNN) at two levels.
At the first level, a residual convolutional layer and one-dimensional convolutional layers are trained to learn patient-specific ECG features.
The second level is automatic and relies on a proposed BO-based algorithm.
- Score: 0.9281671380673306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The interpretation of the electrocardiogram (ECG) gives clinical information
and helps in assessing heart function. Distinct ECG patterns are associated
with specific classes of arrhythmia. The convolutional neural network is
currently one of the most widely applied deep learning algorithms in ECG
processing. However, deep learning models have many hyperparameters to tune,
and selecting optimal hyperparameters for a convolutional neural network is
challenging. Often, the model is tuned manually over different possible ranges
of values until a best-fit model is obtained. Automatic hyperparameter tuning
using Bayesian optimization (BO) and evolutionary algorithms offers a solution
to this arduous manual configuration. In this paper, we propose to optimize the
Residual one-Dimensional Convolutional Neural Network model (R-1D-CNN) at two
levels. At the first level, a residual convolutional layer and one-dimensional
convolutional layers are trained to learn patient-specific ECG features, over
which the multilayer perceptron layers learn to produce the final class vector
for each input. This level is manual and aims to narrow the search space. The
second level is automatic and relies on a proposed BO-based algorithm. The
proposed optimized R-1D-CNN architecture is evaluated on two publicly available
ECG datasets. The experimental results show that the proposed BO-based
algorithm achieves an optimal accuracy of 99.95%, while the baseline model
achieves 99.70% on the MIT-BIH database. Moreover, experiments demonstrate that
the proposed architecture fine-tuned with BO achieves higher accuracy than the
other proposed architectures. Across different experiments, our architecture
also achieves good results compared to previous works.
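The first (manual) level described above stacks residual one-dimensional convolutional layers. A minimal sketch of that building unit, a residual 1D convolution block, is shown below. This is not the authors' implementation: it is a framework-free pure-Python illustration with made-up kernel values, showing how the skip connection adds the input back to the convolved output, and how 'same' padding preserves the sequence length so blocks can be stacked.

```python
def conv1d_same(signal, kernel):
    """1D convolution with zero 'same' padding: output length == input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def relu(xs):
    return [max(0.0, x) for x in xs]

def residual_block(signal, kernel):
    """y = ReLU(conv(x)) + x: the skip connection lets the block fall back to
    the identity mapping, which eases the training of deep stacks."""
    return [a + b for a, b in zip(relu(conv1d_same(signal, kernel)), signal)]

# Toy ECG-like input (illustrative values only). The block preserves the
# sequence length, so blocks can be stacked without tracking shrinking shapes.
ecg = [0.0, 0.2, 1.0, 0.2, 0.0, -0.1, 0.0, 0.1]
out = residual_block(ecg, kernel=[0.25, 0.5, 0.25])
assert len(out) == len(ecg)
```

The second (automatic) level would then treat choices such as the number of blocks and the kernel width as hyperparameters to be searched by the BO-based algorithm.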
Related papers
- Evolutionary Optimization of 1D-CNN for Non-contact Respiration Pattern Classification [0.19999259391104385]
We present a deep learning-based approach for time-series respiration data classification.
We employed a 1D convolutional neural network (1D-CNN) for classification purposes.
A genetic algorithm was employed to optimize the 1D-CNN architecture to maximize classification accuracy.
arXiv Detail & Related papers (2023-12-20T13:59:43Z) - Arrhythmia Classifier Based on Ultra-Lightweight Binary Neural Network [4.8083529516303924]
We propose an ultra-lightweight binary neural network that is capable of 5-class and 17-class arrhythmia classification based on ECG signals.
Our model achieves optimal accuracy in 17-class classification and boasts an elegantly simple network architecture.
Our research showcases the potential of lightweight deep learning models in the healthcare industry.
arXiv Detail & Related papers (2023-04-04T06:47:54Z) - Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm is able to automatically find a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - EvoPruneDeepTL: An Evolutionary Pruning Model for Transfer Learning
based Deep Neural Networks [15.29595828816055]
We propose an evolutionary pruning model for Transfer Learning based Deep Neural Networks.
EvoPruneDeepTL replaces the last fully-connected layers with sparse layers optimized by a genetic algorithm.
Results show the contribution of EvoPruneDeepTL and feature selection to the overall computational efficiency of the network.
arXiv Detail & Related papers (2022-02-08T13:07:55Z) - End-to-End Learning of Deep Kernel Acquisition Functions for Bayesian
Optimization [39.56814839510978]
We propose a meta-learning method for Bayesian optimization with neural network-based kernels.
Our model is trained by a reinforcement learning framework from multiple tasks.
In experiments using three text document datasets, we demonstrate that the proposed method achieves better BO performance than the existing methods.
arXiv Detail & Related papers (2021-11-01T00:42:31Z) - Exploiting Adam-like Optimization Algorithms to Improve the Performance
of Convolutional Neural Networks [82.61182037130405]
Stochastic gradient descent (SGD) is the main approach for training deep networks.
In this work, we compare Adam-based variants that exploit the difference between the present and the past gradients.
We have tested ensembles of networks and their fusion with ResNet50 trained with gradient descent.
arXiv Detail & Related papers (2021-03-26T18:55:08Z) - Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose a use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs)
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.