A Robust Interpretable Deep Learning Classifier for Heart Anomaly
Detection Without Segmentation
- URL: http://arxiv.org/abs/2005.10480v2
- Date: Tue, 29 Sep 2020 06:42:51 GMT
- Title: A Robust Interpretable Deep Learning Classifier for Heart Anomaly
Detection Without Segmentation
- Authors: Theekshana Dissanayake, Tharindu Fernando, Simon Denman, Sridha
Sridharan, Houman Ghaemmaghami, Clinton Fookes
- Abstract summary: We examine the importance of heart sound segmentation as a prior step for heart sound classification.
We then propose a robust classifier for abnormal heart sound detection.
Our new classifier is also shown to be robust, stable and most importantly, explainable, with an accuracy of almost 100% on the widely used PhysioNet dataset.
- Score: 37.70077538403524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, abnormal heart sound classification is framed as a three-stage
process. The first stage involves segmenting the phonocardiogram to detect
fundamental heart sounds, after which features are extracted and classification
is performed. Some researchers in the field argue the segmentation step is an
unwanted computational burden, whereas others embrace it as a prior step to
feature extraction. Comparing the accuracies achieved by studies that segment
heart sounds before analysis with those that omit this step shows that the
question of whether to segment heart sounds before feature extraction remains
open. In this study, we explicitly examine the importance of heart
sound segmentation as a prior step for heart sound classification, and then
seek to apply the obtained insights to propose a robust classifier for abnormal
heart sound detection. Furthermore, recognizing the pressing need for
explainable Artificial Intelligence (AI) models in the medical domain, we also
unveil hidden representations learned by the classifier using model
interpretation techniques. Experimental results demonstrate that the
segmentation plays an essential role in abnormal heart sound classification.
Our new classifier is also shown to be robust, stable and most importantly,
explainable, with an accuracy of almost 100% on the widely used PhysioNet
dataset.
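As a rough illustration of the three-stage framing described in the abstract, the sketch below segments a phonocardiogram with a simple energy threshold, extracts MFCC features per segment, and hands them to an off-the-shelf classifier. It is a minimal sketch assuming librosa, NumPy and scikit-learn; the segmentation heuristic and the classifier are placeholders, not the authors' method.

    # Minimal three-stage sketch: segmentation -> feature extraction -> classification.
    # The energy-threshold segmenter is a crude stand-in for proper heart sound
    # segmentation (e.g. HSMM-based methods used on PhysioNet data).
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def segment_pcg(signal, sr, frame_s=0.5, energy_quantile=0.6):
        """Return (start, end) sample indices of high-energy frames."""
        frame = int(frame_s * sr)
        energies = np.array([np.sum(signal[i:i + frame] ** 2)
                             for i in range(0, len(signal) - frame, frame)])
        threshold = np.quantile(energies, energy_quantile)
        return [(i * frame, i * frame + frame)
                for i, e in enumerate(energies) if e > threshold]

    def extract_features(signal, sr, segments):
        """Mean MFCC vector per detected segment."""
        feats = [librosa.feature.mfcc(y=signal[s:e], sr=sr, n_mfcc=13,
                                      n_fft=256, hop_length=128).mean(axis=1)
                 for s, e in segments]
        return np.vstack(feats)

    clf = SVC(kernel="rbf")  # stage 3: any classifier; the paper itself uses a deep model
    # clf.fit(training_feature_matrix, labels)  # fit on labelled recordings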
Related papers
- View Classification and Object Detection in Cardiac Ultrasound to
Localize Valves via Deep Learning [0.0]
We propose a machine learning pipeline that uses deep neural networks for separate classification and localization steps.
As the first step in the pipeline, we apply view classification to echocardiograms with ten unique anatomic views of the heart.
In the second step, we apply deep learning-based object detection to both localize and identify the valves.
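A minimal sketch of such a two-step pipeline, assuming PyTorch/torchvision with placeholder models and class counts (not the networks used in the cited paper):

    # Step 1: classify the echo view (ten anatomic views); step 2: run an object
    # detector to localize and identify valves. Class counts are illustrative.
    import torch
    import torchvision

    view_classifier = torchvision.models.resnet18(num_classes=10)
    valve_detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=5)

    def analyse_frame(frame):                 # frame: (3, H, W) float tensor in [0, 1]
        view_classifier.eval()
        valve_detector.eval()
        with torch.no_grad():
            view = view_classifier(frame.unsqueeze(0)).argmax(dim=1).item()
            detections = valve_detector([frame])[0]   # dict of boxes, labels, scores
        return view, detections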
arXiv Detail & Related papers (2023-10-31T18:16:02Z) - A Comprehensive Survey on Heart Sound Analysis in the Deep Learning Era [54.53921568420471]
Heart sound auscultation has been applied in clinical usage for early screening of cardiovascular diseases.
Deep learning has outperformed classic machine learning in many research fields.
arXiv Detail & Related papers (2023-01-23T10:58:45Z) - A Causal Intervention Scheme for Semantic Segmentation of Quasi-periodic
Cardiovascular Signals [7.182731690965173]
We propose contrastive causal intervention (CCI) to form a novel training paradigm under a frame-level contrastive framework.
The intervention can eliminate the implicit statistical bias brought by the single attribute and lead to more objective representations.
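The abstract gives no implementation detail for CCI; purely to illustrate what a frame-level contrastive objective looks like (a standard supervised InfoNCE-style loss, not the paper's causal intervention), one could write:

    # Frame-level contrastive loss: frames sharing a label attract, others repel.
    import torch
    import torch.nn.functional as F

    def frame_contrastive_loss(frame_embeddings, frame_labels, temperature=0.1):
        z = F.normalize(frame_embeddings, dim=1)           # (N, D) per-frame embeddings
        sim = z @ z.t() / temperature                      # (N, N) similarities
        self_mask = torch.eye(len(z), dtype=torch.bool)
        sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-similarity
        positives = (frame_labels[:, None] == frame_labels[None, :]) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        per_frame = -(log_prob * positives).sum(1) / positives.sum(1).clamp(min=1)
        return per_frame.mean()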
arXiv Detail & Related papers (2022-09-19T13:54:51Z) - An Algorithm for the Labeling and Interactive Visualization of the
Cerebrovascular System of Ischemic Strokes [59.116811751334225]
VirtualDSA++ is an algorithm designed to segment and label the cerebrovascular tree on CTA scans.
We extend the labeling mechanism for the cerebral arteries to identify occluded vessels.
We present the generic concept of iterative systematic search for pathways on all nodes of said model, which enables new interactive features.
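As a generic sketch of an "iterative systematic search for pathways on all nodes" over a labelled vessel graph (assuming networkx; this is not the VirtualDSA++ implementation):

    # For every node of the vessel model, search a pathway to a chosen root artery;
    # nodes without a complete pathway are collected, e.g. as candidates downstream
    # of an occlusion.
    import networkx as nx

    def pathways_to_root(vessel_graph, root):
        paths, unreachable = {}, []
        for node in vessel_graph.nodes:
            try:
                paths[node] = nx.shortest_path(vessel_graph, node, root)
            except nx.NetworkXNoPath:
                unreachable.append(node)
        return paths, unreachable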
arXiv Detail & Related papers (2022-04-26T14:20:26Z) - ECG-Based Heart Arrhythmia Diagnosis Through Attentional Convolutional
Neural Networks [9.410102957429705]
We propose Attention-Based Convolutional Neural Networks (ABCNN) to work on the raw ECG signals and automatically extract the informative dependencies for accurate arrhythmia detection.
Our main task is to distinguish arrhythmias from normal heartbeats and, at the same time, accurately recognize the heart disease from five arrhythmia types.
The experimental results show that the proposed ABCNN outperforms the widely used baselines.
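One plausible minimal layout for an attention-based 1-D CNN over raw ECG, assuming PyTorch (layer sizes and the attention pooling are illustrative assumptions, not the ABCNN architecture):

    # 1-D CNN with soft attention pooling over time, applied to raw ECG samples.
    import torch
    import torch.nn as nn

    class TinyAttentionCNN(nn.Module):
        def __init__(self, n_classes=6):               # normal + five arrhythmia types
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            )
            self.attn = nn.Conv1d(32, 1, kernel_size=1)  # per-timestep attention score
            self.fc = nn.Linear(32, n_classes)

        def forward(self, x):                          # x: (batch, 1, samples)
            h = self.conv(x)                           # (batch, 32, samples)
            w = torch.softmax(self.attn(h), dim=-1)    # attention weights over time
            return self.fc((h * w).sum(dim=-1))        # weighted pooling + classifier

    logits = TinyAttentionCNN()(torch.randn(4, 1, 3600))  # e.g. 10 s of 360 Hz ECG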
arXiv Detail & Related papers (2021-08-18T14:55:46Z) - Segmentation-free Heart Pathology Detection Using Deep Learning [12.065014651638943]
We propose a novel segmentation-free heart sound classification method.
Specifically, we apply discrete wavelet transform to denoise the signal, followed by feature extraction and feature reduction.
Support Vector Machines and Deep Neural Networks are utilised for classification.
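A minimal sketch of that segmentation-free route, assuming PyWavelets and scikit-learn (wavelet, threshold and feature choices are illustrative, not those of the cited paper):

    # Wavelet denoising, simple hand-crafted features, PCA reduction, SVM classifier.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)

    def simple_features(signal):
        return np.array([signal.mean(), signal.std(),
                         np.abs(np.diff(signal)).mean(), np.max(np.abs(signal))])

    model = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
    # model.fit(np.vstack([simple_features(wavelet_denoise(s)) for s in signals]), labels)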
arXiv Detail & Related papers (2021-08-09T16:09:30Z) - A Visual Domain Transfer Learning Approach for Heartbeat Sound
Classification [0.0]
Heart disease is the leading cause of human mortality, accounting for almost one-third of deaths worldwide.
Detecting the disease early increases a patient's chances of survival, and there are several ways in which signs of heart disease can be detected early.
This research proposes converting cleansed and normalized heart sounds into mel-scale spectrograms and then using visual-domain transfer learning approaches to automatically extract features and classify the heart sounds.
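A minimal sketch of this spectrogram-plus-transfer-learning idea, assuming librosa and torchvision (the backbone and spectrogram parameters are assumptions, not the cited paper's setup):

    # Heart sound -> mel-scale spectrogram "image" -> pretrained vision backbone.
    import librosa
    import numpy as np
    import torch
    import torchvision

    def heart_sound_to_mel_image(wav_path, sr=2000):
        y, sr = librosa.load(wav_path, sr=sr)
        y = librosa.util.normalize(y)                          # cleansed and normalized
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, n_fft=512)
        mel_db = librosa.power_to_db(mel, ref=np.max)
        img = (mel_db - mel_db.min()) / (mel_db.ptp() + 1e-8)  # scale to [0, 1]
        return torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1)

    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")   # transfer learning
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)         # normal vs abnormal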
arXiv Detail & Related papers (2021-07-28T09:41:38Z) - Generalized Organ Segmentation by Imitating One-shot Reasoning using
Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results on the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z) - Noise-Resilient Automatic Interpretation of Holter ECG Recordings [67.59562181136491]
We present a three-stage process for analysing Holter recordings that is robust to noisy signals.
The first stage is a segmentation neural network (NN) with an encoder-decoder architecture that detects the positions of heartbeats.
The second stage is a classification NN that labels heartbeats as wide or narrow.
The third stage is a gradient boosting decision tree (GBDT) model on top of the NN features that incorporates patient-wise features.
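A skeleton of such a three-stage arrangement, with placeholder callables for the two networks and scikit-learn's gradient boosting standing in for the GBDT (an assumption, not the authors' code):

    # Stage 1: NN finds beat positions; stage 2: NN produces per-beat features;
    # stage 3: GBDT on aggregated NN features plus patient-wise statistics.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def holter_pipeline(ecg, seg_net, beat_net, gbdt):
        beat_positions = seg_net(ecg)                                       # stage 1
        beat_feats = np.vstack([beat_net(ecg, p) for p in beat_positions])  # stage 2
        patient_feats = np.array([len(beat_positions),                      # stage 3 inputs
                                  float(np.mean(np.diff(beat_positions)))])
        x = np.hstack([beat_feats.mean(axis=0), patient_feats])
        return gbdt.predict(x.reshape(1, -1))

    gbdt = GradientBoostingClassifier()
    # gbdt.fit(recording_feature_matrix, recording_labels)  # fit before use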
arXiv Detail & Related papers (2020-11-17T16:15:49Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced
Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
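For reference, the two reported metrics can be approximated for binary masks as follows (a generic sketch with NumPy and SciPy, not the challenge's official evaluation code; isotropic voxel spacing is assumed):

    # Dice overlap and symmetric mean surface distance between two binary masks.
    import numpy as np
    from scipy import ndimage

    def dice(pred, gt):
        pred, gt = pred.astype(bool), gt.astype(bool)
        return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

    def mean_surface_distance(pred, gt):
        pred, gt = pred.astype(bool), gt.astype(bool)
        pred_surf = pred ^ ndimage.binary_erosion(pred)
        gt_surf = gt ^ ndimage.binary_erosion(gt)
        d_to_gt = ndimage.distance_transform_edt(~gt_surf)[pred_surf]
        d_to_pred = ndimage.distance_transform_edt(~pred_surf)[gt_surf]
        return (d_to_gt.mean() + d_to_pred.mean()) / 2.0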
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.