Optimizing Neural Network Scale for ECG Classification
- URL: http://arxiv.org/abs/2308.12492v1
- Date: Thu, 24 Aug 2023 01:26:31 GMT
- Title: Optimizing Neural Network Scale for ECG Classification
- Authors: Byeong Tak Lee, Yong-Yeon Jo, Joon-Myoung Kwon
- Abstract summary: We study scaling convolutional neural networks (CNNs), specifically targeting Residual neural networks (ResNet), for analyzing electrocardiograms (ECGs).
We explored and demonstrated an efficient approach to scale ResNet by examining the effects of crucial parameters, including layer depth, the number of channels, and the convolution kernel size.
Our findings provide insight into obtaining more efficient and accurate models with fewer computing resources or less time.
- Score: 1.8953148404648703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study scaling convolutional neural networks (CNNs), specifically targeting
Residual neural networks (ResNet), for analyzing electrocardiograms (ECGs).
Although ECG signals are time-series data, CNN-based models have been shown to
outperform other neural networks with different architectures in ECG analysis.
However, most previous studies in ECG analysis have overlooked the importance
of network scaling optimization, which significantly improves performance. We
explored and demonstrated an efficient approach to scale ResNet by examining
the effects of crucial parameters, including layer depth, the number of
channels, and the convolution kernel size. Through extensive experiments, we
found that a shallower network, a larger number of channels, and smaller kernel
sizes result in better performance for ECG classifications. The optimal network
scale might differ depending on the target task, but our findings provide
insight into obtaining more efficient and accurate models with fewer computing
resources or less time. In practice, we demonstrate that a narrower search
space based on our findings leads to higher performance.
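The scaling knobs discussed in the abstract (layer depth, channel width, convolution kernel size) map directly onto a configurable 1D ResNet. The sketch below is a minimal, hypothetical PyTorch-style implementation, not the authors' code: the class names, default values, and the narrowed search-space grid are illustrative assumptions chosen only to reflect the reported trend (shallower networks, more channels, smaller kernels).

```python
# Hypothetical sketch of a configurable 1D ResNet for ECG classification.
# Names and hyperparameter values are illustrative, not from the paper.
import torch
import torch.nn as nn


class ResBlock1D(nn.Module):
    def __init__(self, channels: int, kernel_size: int):
        super().__init__()
        pad = kernel_size // 2  # odd kernels keep the temporal length unchanged
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection


class ECGResNet(nn.Module):
    def __init__(self, in_leads=12, num_classes=5,
                 depth=4, channels=256, kernel_size=3):
        super().__init__()
        self.stem = nn.Conv1d(in_leads, channels, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(
            *[ResBlock1D(channels, kernel_size) for _ in range(depth)]
        )
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):          # x: (batch, leads, samples)
        h = self.blocks(self.stem(x))
        h = h.mean(dim=-1)         # global average pooling over time
        return self.head(h)


# A narrowed search space in the spirit of the reported findings
# (shallow depth, wide channels, small kernels); values are placeholders.
search_space = {"depth": [2, 4, 8],
                "channels": [128, 256, 512],
                "kernel_size": [3, 5]}

model = ECGResNet(depth=4, channels=256, kernel_size=3)
logits = model(torch.randn(8, 12, 5000))  # e.g., 10 s of 500 Hz 12-lead ECG
```

Restricting a hyperparameter search to a grid like the one above, rather than sweeping arbitrary depth/width/kernel combinations, is one way to act on the paper's observation that a narrower search space can yield higher performance at lower cost.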
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - EEGSN: Towards Efficient Low-latency Decoding of EEG with Graph Spiking
Neural Networks [4.336065967298193]
A majority of spiking neural networks (SNNs) are trained based on inductive biases that are not necessarily a good fit for several critical tasks that require low latency and power efficiency.
Here, we propose a graph spiking neural architecture for multi-channel EEG classification (EEGSN) that learns the dynamic relational information present in the distributed EEG sensors.
Our method reduced the inference computational complexity by $20\times$ compared to state-of-the-art SNNs, while achieving comparable accuracy on motor execution tasks.
arXiv Detail & Related papers (2023-04-15T23:30:17Z) - Effective classification of ecg signals using enhanced convolutional
neural network in iot [0.0]
This paper proposes a routing system for IoT healthcare platforms based on Dynamic Source Routing (DSR) and Routing by Energy and Link Quality (REL).
Deep-ECG employs a deep CNN to extract important characteristics, which are then compared using simple and fast distance functions.
The results show that the proposed strategy outperforms others in terms of classification accuracy.
arXiv Detail & Related papers (2022-02-08T13:37:23Z) - Classification of Motor Imagery EEG Signals by Using a Divergence Based
Convolutional Neural Network [0.0]
It is observed that the augmentation process has generally not been applied to increase the classification performance of EEG signals.
In this study, we have investigated the effect of the augmentation process on the classification performance of MI EEG signals.
arXiv Detail & Related papers (2021-03-19T18:27:28Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Genetic U-Net: Automatically Designed Deep Networks for Retinal Vessel
Segmentation Using a Genetic Algorithm [2.6629444004809826]
Genetic U-Net is proposed to generate a U-shaped convolutional neural network (CNN) that can achieve better retinal vessel segmentation but with fewer architecture-based parameters.
The experimental results show that the architecture obtained using the proposed method offered superior performance while using less than 1% of the original U-Net's parameters.
arXiv Detail & Related papers (2020-10-29T13:31:36Z) - The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z) - KiU-Net: Towards Accurate Segmentation of Biomedical Images using
Over-complete Representations [59.65174244047216]
We propose an over-complete architecture (Ki-Net) which involves projecting the data onto higher dimensions.
This network, when augmented with U-Net, results in significant improvements in the case of segmenting small anatomical landmarks.
We evaluate the proposed method on the task of brain anatomy segmentation from 2D Ultrasound of preterm neonates.
arXiv Detail & Related papers (2020-06-08T18:59:24Z) - DRU-net: An Efficient Deep Convolutional Neural Network for Medical
Image Segmentation [2.3574651879602215]
Residual networks (ResNet) and densely connected networks (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs).
We propose an efficient network architecture by considering advantages of both networks.
arXiv Detail & Related papers (2020-04-28T12:16:24Z) - What Deep CNNs Benefit from Global Covariance Pooling: An Optimization
Perspective [102.37204254403038]
We make an attempt to understand what deep CNNs benefit from GCP in a viewpoint of optimization.
We show that GCP can make the optimization landscape more smooth and the gradients more predictive.
We conduct extensive experiments using various deep CNN models on diversified tasks, and the results provide strong support to our findings.
arXiv Detail & Related papers (2020-03-25T07:00:45Z)