MetaVA: Curriculum Meta-learning and Pre-fine-tuning of Deep Neural
Networks for Detecting Ventricular Arrhythmias based on ECGs
- URL: http://arxiv.org/abs/2202.12450v2
- Date: Tue, 1 Mar 2022 02:05:59 GMT
- Title: MetaVA: Curriculum Meta-learning and Pre-fine-tuning of Deep Neural
Networks for Detecting Ventricular Arrhythmias based on ECGs
- Authors: Wenrui Zhang, Shijia Geng, Zhaoji Fu, Linlin Zheng, Chenyang Jiang,
Shenda Hong
- Abstract summary: Ventricular arrhythmias (VA) are the main causes of sudden cardiac death.
We propose a novel method that combines model-agnostic meta-learning (MAML) with curriculum learning (CL) to address group-level diversity.
We conduct experiments using a combination of three publicly available ECG datasets.
- Score: 9.600976281032862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ventricular arrhythmias (VA) are the main causes of sudden cardiac death.
Developing machine learning methods for detecting VA based on
electrocardiograms (ECGs) can help save people's lives. However, developing
such machine learning models for ECGs is challenging because of the following:
1) group-level diversity from different subjects and 2) individual-level
diversity from different moments of a single subject. In this study, we aim to
solve these problems in the pre-training and fine-tuning stages. For the
pre-training stage, we propose a novel method that combines model-agnostic meta-learning
(MAML) with curriculum learning (CL) to address group-level diversity. MAML is
expected to better transfer the knowledge from a large dataset and use only a
few recordings to quickly adapt the model to a new person. CL is supposed to
further improve MAML by meta-learning from easy to difficult tasks. For the
fine-tuning stage, we propose an improved pre-fine-tuning procedure to address
individual-level diversity. We conduct experiments using a combination of three
publicly available ECG datasets. The results show that our method outperforms
the compared methods in terms of all evaluation metrics. Ablation studies show
that MAML and CL help the model perform more evenly across individuals, and that
pre-fine-tuning helps the model fit the training data better.
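To make the pre-training recipe above concrete, the following is a minimal sketch of curriculum meta-learning using a first-order MAML approximation. It is not the authors' implementation: the EcgCNN backbone, the per-task difficulty scores, and the (support, query) task tuples are hypothetical placeholders, and the improved pre-fine-tuning stage is omitted.

```python
# Minimal first-order MAML + curriculum sketch for per-subject ECG tasks (illustrative only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class EcgCNN(nn.Module):
    """Tiny 1-D CNN stand-in for a backbone applied to single-lead ECG windows."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                       # x: (batch, 1, length)
        return self.head(self.features(x).squeeze(-1))


def inner_adapt(model, support_x, support_y, lr_inner=1e-2, steps=3):
    """Clone the meta-model and take a few gradient steps on one subject's support set."""
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=lr_inner)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(fast(support_x), support_y).backward()
        opt.step()
    return fast


def meta_train(model, tasks, difficulty, lr_outer=1e-3, meta_batch=4):
    """First-order MAML over per-subject tasks ordered easy -> difficult (curriculum)."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=lr_outer)
    # Curriculum: present tasks in increasing order of an (assumed) difficulty score.
    ordered = [t for _, t in sorted(zip(difficulty, tasks), key=lambda p: p[0])]
    for start in range(0, len(ordered), meta_batch):
        meta_opt.zero_grad()
        batch = ordered[start:start + meta_batch]
        for support_x, support_y, query_x, query_y in batch:
            fast = inner_adapt(model, support_x, support_y)   # adapt to one subject
            fast.zero_grad()
            (F.cross_entropy(fast(query_x), query_y) / len(batch)).backward()
            # First-order approximation: reuse the adapted model's query gradients
            # as the meta-gradient for the shared initialization.
            for p, fp in zip(model.parameters(), fast.parameters()):
                p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
        meta_opt.step()
    return model
```

At inference on a new subject, the same inner_adapt routine would be reused with a few labeled recordings as the support set before evaluating the adapted copy, mirroring the fast-adaptation goal stated in the abstract.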
Related papers
- Multi-omics data integration for early diagnosis of hepatocellular carcinoma (HCC) using machine learning [8.700808005009806]
We compare the performance of ensemble machine learning algorithms capable of late integration of multi-class data from different modalities.
Two boosted methods, PB-MVBoost and Adaboost with a soft vote, were the overall best-performing models.
arXiv Detail & Related papers (2024-09-20T09:38:02Z)
- Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification [8.975676404678374]
We present a strategy for improving the performance and generalization capabilities of models trained in low-data regimes.
The proposed method starts with a pre-training phase, where features learned in a self-supervised learning setting are disentangled to improve the robustness of the representations for downstream tasks.
We then introduce a meta-fine-tuning step, leveraging related classes between meta-training and meta-testing phases but varying the granularity level.
arXiv Detail & Related papers (2024-03-26T09:36:20Z)
- MELEP: A Novel Predictive Measure of Transferability in Multi-Label ECG Diagnosis [1.3654846342364306]
We introduce MELEP, a measure designed to estimate the effectiveness of knowledge transfer from a pre-trained model to a downstream ECG diagnosis task.
Our experiments show that MELEP can predict the performance of pre-trained convolutional and recurrent deep neural networks on small and imbalanced ECG data.
arXiv Detail & Related papers (2023-10-27T14:57:10Z)
- Improving Multiple Sclerosis Lesion Segmentation Across Clinical Sites: A Federated Learning Approach with Noise-Resilient Training [75.40980802817349]
Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area.
We introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions.
We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites.
arXiv Detail & Related papers (2023-08-31T00:36:10Z)
- Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification using Dermoscopy and Clinical Images [7.159532626507458]
This study introduces a Graph Convolution Network (GCN) that encodes the prior co-occurrence between categories as a correlation matrix inside the deep learning model for multi-label classification.
We propose a Graph-Ensemble Learning Model (GELN) that treats the GCN predictions as complementary information to the predictions of the fusion model (a rough sketch of this idea follows below).
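As a loose illustration of this idea (not the GELN paper's exact formulation), a label co-occurrence matrix can be estimated from the training labels and used as the GCN's correlation prior, with the GCN and fusion-model outputs combined by a simple weighted late fusion; the weighting and the toy labels below are assumptions.

```python
# Hedged sketch: label co-occurrence as a correlation/adjacency prior, plus late fusion.
import torch

def cooccurrence_matrix(labels: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """labels: (n_samples, n_classes) multi-hot matrix -> row-normalised P(j | i)."""
    counts = labels.T @ labels                         # co-occurrence counts C[i, j]
    class_counts = labels.sum(dim=0, keepdim=True).T   # occurrences of each label i
    return counts / (class_counts + eps)

def ensemble_predictions(fusion_logits, gcn_logits, weight=0.5):
    """Late fusion: treat the GCN output as complementary to the fusion model."""
    return weight * torch.sigmoid(gcn_logits) + (1 - weight) * torch.sigmoid(fusion_logits)

# Toy example: 4 samples, 3 skin-lesion labels.
y = torch.tensor([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 0]], dtype=torch.float32)
adjacency = cooccurrence_matrix(y)   # could serve as the GCN's correlation matrix
```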
arXiv Detail & Related papers (2023-07-04T13:19:57Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN).
CMMN consists of filtering the signals so that their power spectral density (PSD) is adapted to a Wasserstein barycenter estimated on the training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains, independent of the neural network architecture (a rough sketch of the PSD-matching step follows below).
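The following sketch only illustrates the PSD-matching idea, assuming the standard Monge-map form for stationary Gaussian signals (square-root PSD ratio as a per-frequency gain); it is not the authors' code, and the Welch parameters and interpolation onto the FFT grid are illustrative choices.

```python
# Hedged sketch of CMMN-style PSD matching: estimate PSDs, form a barycenter PSD
# from training signals, and filter each signal toward that barycenter.
import numpy as np
from scipy.signal import welch

def barycenter_psd(signals, fs, nperseg=256):
    """Barycenter of PSDs taken as the square of the averaged square-root PSDs."""
    freqs, _ = welch(signals[0], fs=fs, nperseg=nperseg)
    psds = np.array([welch(s, fs=fs, nperseg=nperseg)[1] for s in signals])
    return freqs, np.mean(np.sqrt(psds), axis=0) ** 2

def cmmn_normalize(signal, fs, bary_freqs, bary_psd, nperseg=256, eps=1e-12):
    """Filter one signal so its PSD is mapped toward the barycenter PSD."""
    _, psd = welch(signal, fs=fs, nperseg=nperseg)
    gain = np.sqrt((bary_psd + eps) / (psd + eps))       # per-frequency scaling factor
    spectrum = np.fft.rfft(signal)
    sig_freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    full_gain = np.interp(sig_freqs, bary_freqs, gain)   # resample gain to the FFT grid
    return np.fft.irfft(spectrum * full_gain, n=len(signal))
```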
arXiv Detail & Related papers (2023-05-30T08:24:01Z)
- Competence-based Multimodal Curriculum Learning for Medical Report Generation [98.10763792453925]
We propose a Competence-based Multimodal Curriculum Learning framework (CMCL) to alleviate data bias and make the best use of the available data.
Specifically, CMCL simulates the learning process of radiologists and optimizes the model in a step-by-step manner.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that CMCL can be incorporated into existing models to improve their performance.
arXiv Detail & Related papers (2022-06-24T08:16:01Z)
- Survival Prediction of Heart Failure Patients using Stacked Ensemble Machine Learning Algorithm [0.0]
Heart failure is one of the major health hazards of our time and a leading cause of death worldwide.
Data mining is the process of converting the massive volumes of raw data created by healthcare institutions into meaningful information.
Our study shows that only certain attributes collected from patients are needed to successfully predict survival after heart failure.
arXiv Detail & Related papers (2021-08-30T16:42:27Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)