Development of Interpretable Machine Learning Models to Detect
Arrhythmia based on ECG Data
- URL: http://arxiv.org/abs/2205.02803v2
- Date: Sat, 7 May 2022 16:19:08 GMT
- Title: Development of Interpretable Machine Learning Models to Detect
Arrhythmia based on ECG Data
- Authors: Shourya Verma
- Abstract summary: This thesis builds Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) classifiers based on state-of-the-art models.
Both global and local interpretability methods are used to understand the interaction between dependent and independent variables.
Grad-CAM was found to be the most effective interpretability technique for explaining the predictions of the proposed CNN and LSTM models.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The analysis of electrocardiogram (ECG) signals can be time-consuming as it
is performed manually by cardiologists. Therefore, automation through machine
learning (ML) classification is increasingly being proposed, which would allow
ML models to learn the features of a heartbeat and detect abnormalities.
However, the lack of interpretability hinders the application of deep learning
in healthcare. Through the interpretability of these models, we can understand
how a machine learning algorithm makes its decisions and which patterns it
follows for classification. This thesis builds Convolutional Neural Network
(CNN) and Long Short-Term Memory (LSTM) classifiers based on state-of-the-art
models and compares their performance and interpretability to shallow
classifiers. Here, both global and local interpretability methods are used to
understand the interaction between dependent and independent variables across
the entire dataset and to examine model decisions in each sample,
respectively. Partial Dependence Plots, Shapley Additive Explanations,
Permutation Feature Importance, and Gradient-weighted Class Activation Mapping
(Grad-CAM) are the four interpretability techniques implemented on time-series
ML models classifying ECG rhythms. In particular, we use Grad-CAM, a local
interpretability technique, and examine whether its interpretability varies
between correctly and incorrectly classified ECG beats within each class.
Furthermore, the classifiers are evaluated using K-Fold cross-validation and
Leave Groups Out techniques, and we use non-parametric statistical testing to
examine whether differences are significant. It was found that Grad-CAM was
the most effective interpretability technique at explaining the predictions of
the proposed CNN and LSTM models. We concluded that all high-performing
classifiers looked at the QRS complex of the ECG rhythm when making
predictions.
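The abstract pairs the deep classifiers with shallow ones and applies global interpretability methods such as Partial Dependence Plots and Permutation Feature Importance. As a rough illustration only, the sketch below computes two of these global methods with scikit-learn on a shallow classifier; the gradient-boosting model, synthetic features, and chosen feature indices are placeholders and are not taken from the thesis.

```python
# Illustrative sketch (not the thesis code): global interpretability on a
# shallow classifier via permutation feature importance and partial dependence.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # stand-in for handcrafted beat features
y = (X[:, 2] + 0.5 * X[:, 5] > 0).astype(int)    # label driven mostly by features 2 and 5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in held-out score when each feature is shuffled.
pi = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for idx in pi.importances_mean.argsort()[::-1][:3]:
    print(f"feature {idx}: {pi.importances_mean[idx]:.3f} +/- {pi.importances_std[idx]:.3f}")

# Partial dependence: average model output as feature 2 is varied over a grid.
pd_result = partial_dependence(clf, X_te, features=[2])
print(pd_result["average"].shape)                # (1, n_grid_points)
```

In the thesis's setting the same calls would be run on the actual extracted ECG features rather than synthetic data.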
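Grad-CAM, the local technique the abstract singles out, weights the feature maps of the last convolutional layer by the gradient of the class score and projects the result back onto the input. The following is a minimal sketch of that idea for a 1D CNN over single ECG beats, assuming a Keras model, a layer named last_conv, and a 187-sample beat length; none of these details come from the thesis itself.

```python
# Illustrative Grad-CAM sketch for a 1D CNN ECG-beat classifier (TensorFlow/Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_samples=187, n_classes=5):
    """Small placeholder CNN; 187 samples per beat is a common MIT-BIH preprocessing choice."""
    return models.Sequential([
        layers.Input(shape=(n_samples, 1)),
        layers.Conv1D(32, 5, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu", padding="same", name="last_conv"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

def grad_cam_1d(model, beat, class_idx, conv_layer="last_conv"):
    """Return a per-time-step importance map for one beat of shape (n_samples, 1)."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(beat[np.newaxis, ...])
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)       # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=1)            # average gradients over time axis
    cam = tf.reduce_sum(weights[:, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0].numpy()                   # keep only positive evidence
    cam = np.interp(np.linspace(0, 1, beat.shape[0]),  # upsample to the input length
                    np.linspace(0, 1, cam.shape[0]), cam)
    return cam / (cam.max() + 1e-8)

model = build_cnn()
beat = np.random.randn(187, 1).astype("float32")       # placeholder beat
heatmap = grad_cam_1d(model, beat, class_idx=0)
```

Plotting the returned heatmap over the beat is the usual way to check whether the network attends to the QRS complex, which is what the thesis reports for its high-performing classifiers.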
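The abstract also describes evaluating the classifiers with K-Fold cross-validation and Leave Groups Out, followed by non-parametric statistical testing. A minimal sketch of such a protocol, assuming scikit-learn's LeaveOneGroupOut for patient-wise splits and a Wilcoxon signed-rank test for the paired fold-wise comparison (the thesis's exact splits, metrics, and tests may differ), could look like this:

```python
# Illustrative evaluation sketch: grouped cross-validation plus a non-parametric test.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))           # placeholder per-beat feature matrix
y = rng.integers(0, 2, size=600)         # placeholder beat labels
groups = rng.integers(0, 10, size=600)   # patient/record IDs so no patient spans folds

cv = LeaveOneGroupOut()                  # "Leave Groups Out": hold out whole patients
rf_scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                            cv=cv, groups=groups, scoring="f1_macro")
lr_scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                            cv=cv, groups=groups, scoring="f1_macro")

# Paired non-parametric comparison of the two classifiers across the same folds.
stat, p_value = wilcoxon(rf_scores, lr_scores)
print(f"RF {rf_scores.mean():.3f} vs LR {lr_scores.mean():.3f}, Wilcoxon p={p_value:.3f}")
```

Grouping by patient prevents beats from the same recording from appearing in both training and test folds, which is the usual motivation for Leave Groups Out evaluation of ECG classifiers.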
Related papers
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a pursuit to employ interpretable algorithms to assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- An Evaluation of Machine Learning Approaches for Early Diagnosis of Autism Spectrum Disorder [0.0]
Autistic Spectrum Disorder (ASD) is a neurological disease characterized by difficulties with social interaction, communication, and repetitive activities.
This study employs diverse machine learning methods to identify crucial ASD traits, aiming to enhance and automate the diagnostic process.
arXiv Detail & Related papers (2023-09-20T21:23:37Z)
- Classification and Self-Supervised Regression of Arrhythmic ECG Signals Using Convolutional Neural Networks [13.025714736073489]
We propose a deep neural network model capable of solving regression and classification tasks.
We tested the model on the MIT-BIH Arrhythmia database.
arXiv Detail & Related papers (2022-10-25T18:11:13Z)
- RandomSCM: interpretable ensembles of sparse classifiers tailored for omics data [59.4141628321618]
We propose an ensemble learning algorithm based on conjunctions or disjunctions of decision rules.
The interpretability of the models makes them useful for biomarker discovery and pattern discovery in high-dimensional data.
arXiv Detail & Related papers (2022-08-11T13:55:04Z)
- Improving ECG Classification Interpretability using Saliency Maps [0.0]
We propose a method for visualizing model decisions across each class in the MIT-BIH arrhythmia dataset.
This paper highlights how these maps can be used to find problems in the model which could be affecting generalizability and model performance.
arXiv Detail & Related papers (2022-01-10T16:12:25Z)
- Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Designing ECG Monitoring Healthcare System with Federated Transfer Learning and Explainable AI [4.694126527114577]
We design a new explainable artificial intelligence (XAI) based deep learning framework in a federated setting for ECG-based healthcare applications.
The proposed framework was trained and tested using the MIT-BIH Arrhythmia database.
arXiv Detail & Related papers (2021-05-26T11:59:44Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand how a classification model partitions the feature space into predicted classes, using quantile shifts.
Real data points (or specific points of interest) are used, and the change in the prediction after slightly raising or lowering specific features is observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Data Efficient and Weakly Supervised Computational Pathology on Whole Slide Images [4.001273534300757]
Computational pathology has the potential to enable objective diagnosis, therapeutic response prediction, and identification of new morphological features of clinical relevance.
Deep learning-based computational pathology approaches either require manual annotation of gigapixel whole slide images (WSIs) in fully-supervised settings or thousands of WSIs with slide-level labels in a weakly-supervised setting.
Here we present CLAM - Clustering-constrained attention multiple instance learning.
arXiv Detail & Related papers (2020-04-20T23:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.