Interpretable Deep Models for Cardiac Resynchronisation Therapy Response
Prediction
- URL: http://arxiv.org/abs/2006.13811v2
- Date: Thu, 9 Jul 2020 10:59:55 GMT
- Title: Interpretable Deep Models for Cardiac Resynchronisation Therapy Response
Prediction
- Authors: Esther Puyol-Antón, Chen Chen, James R. Clough, Bram Ruijsink,
Baldeep S. Sidhu, Justin Gould, Bradley Porter, Mark Elliott, Vishal Mehta,
Daniel Rueckert, Christopher A. Rinaldi, and Andrew P. King
- Abstract summary: We propose a novel framework for image-based classification based on a variational autoencoder (VAE).
The VAE disentangles the latent space based on `explanations' drawn from existing clinical knowledge.
We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images.
- Score: 8.152884957975354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in deep learning (DL) have resulted in impressive accuracy in some
medical image classification tasks, but often deep models lack
interpretability. The ability of these models to explain their decisions is
important for fostering clinical trust and facilitating clinical translation.
Furthermore, for many problems in medicine there is a wealth of existing
clinical knowledge to draw upon, which may be useful in generating
explanations, but it is not obvious how this knowledge can be encoded into DL
models - most models are learnt either from scratch or using transfer learning
from a different domain. In this paper we address both of these issues. We
propose a novel DL framework for image-based classification based on a
variational autoencoder (VAE). The framework allows prediction of the output of
interest from the latent space of the autoencoder, as well as visualisation (in
the image domain) of the effects of crossing the decision boundary, thus
enhancing the interpretability of the classifier. Our key contribution is that
the VAE disentangles the latent space based on `explanations' drawn from
existing clinical knowledge. The framework can predict outputs as well as
explanations for these outputs, and also raises the possibility of discovering
new biomarkers that are separate (or disentangled) from the existing knowledge.
We demonstrate our framework on the problem of predicting response of patients
with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine
cardiac magnetic resonance images. The sensitivity and specificity of the
proposed model on the task of CRT response prediction are 88.43% and 84.39%
respectively, and we showcase the potential of our model in enhancing
understanding of the factors contributing to CRT response.
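To make the mechanism concrete, here is a minimal sketch of the core idea (hypothetical names and shapes, not the authors' released code): a VAE whose latent code feeds both a decoder and a classifier, plus a routine that walks a latent code across the classifier's decision boundary and decodes each step so the effect can be inspected in the image domain.

```python
import torch
import torch.nn as nn

class LatentClassifierVAE(nn.Module):
    """Sketch: a VAE whose latent code also feeds a classifier, so the
    prediction (e.g. CRT response) is made from the latent space."""
    def __init__(self, in_dim=4096, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))
        self.clf = nn.Linear(z_dim, 1)  # binary responder / non-responder

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.dec(z), self.clf(z), mu, logvar

def cross_decision_boundary(model, z, steps=10, step_size=0.5):
    """Move a latent code against the classifier logit and decode each
    step, visualising (in image space) what changes across the boundary."""
    z = z.clone().requires_grad_(True)
    frames = []
    for _ in range(steps):
        logit = model.clf(z).sum()
        grad, = torch.autograd.grad(logit, z)
        with torch.no_grad():
            z -= step_size * grad  # push toward the opposite class
        frames.append(model.dec(z).detach())
    return frames
```

Training would combine the usual VAE reconstruction and KL terms with a classification loss on the logit; the paper's disentanglement of latent dimensions against clinical explanations adds further regularisation terms not shown in this sketch.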
Related papers
- Counterfactual Explanations for Medical Image Classification and Regression using Diffusion Autoencoder [38.81441978142279]
We propose a novel method that operates directly on the latent space of a generative model, specifically a Diffusion Autoencoder (DAE).
This approach offers inherent interpretability by enabling the generation of counterfactual explanations (CEs).
We show that these latent representations are helpful for medical condition classification and the ordinal regression of pathologies, such as vertebral compression fractures (VCF) and diabetic retinopathy (DR).
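A minimal sketch of one common way such latent-space counterfactuals are built (illustrative only; the names, shapes, and the linear-direction assumption are mine, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: latents from the autoencoder's encoder plus binary labels.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 64))        # encoded training latents (placeholder)
y = rng.integers(0, 2, size=200)      # e.g. fracture present / absent

# A linear classifier in latent space; its weight vector gives a
# semantic direction for the condition.
clf = LogisticRegression().fit(Z, y)
w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

def counterfactual_latent(z, alpha):
    """Shift a latent code along the class direction; decoding the result
    with the DAE decoder (not shown) yields the counterfactual image."""
    return z - alpha * w  # larger alpha pushes further past the boundary

z_cf = counterfactual_latent(Z[0], alpha=2.0)
```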
arXiv Detail & Related papers (2024-08-02T21:01:30Z)
- Attribute Regularized Soft Introspective Variational Autoencoder for Interpretable Cardiac Disease Classification [2.4828003234992666]
Interpretability is essential to ensure that clinicians can comprehend and trust artificial intelligence models.
We propose a novel interpretable approach that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder.
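A minimal sketch of an attribute-regularization term of the kind this describes (my formulation, following the common AR-VAE recipe, not necessarily the paper's exact loss): it encourages one latent dimension to order samples the same way a clinical attribute does.

```python
import torch

def attribute_regularization(z_d, attr, delta=1.0):
    """z_d: (B,) values of one latent dimension; attr: (B,) attribute values
    (e.g. ejection fraction). Penalises disagreement between the orderings."""
    dz = z_d[:, None] - z_d[None, :]      # pairwise latent differences
    da = attr[:, None] - attr[None, :]    # pairwise attribute differences
    return torch.abs(torch.tanh(delta * dz) - torch.sign(da)).mean()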
arXiv Detail & Related papers (2023-12-14T13:20:57Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
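A minimal sketch of the bottleneck step (illustrative; cosine similarity with a CLIP-style model stands in for the paper's vision-language alignment):

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: 512-d embeddings, 20 clinical concepts, 3 classes.
img_feat = torch.randn(512)            # image embedding from a VLM encoder
concept_embs = torch.randn(20, 512)    # text embeddings of clinical concepts

# Concept scores form the bottleneck: every downstream weight is attached
# to a named clinical concept, which is what makes the classifier readable.
scores = F.cosine_similarity(img_feat[None, :], concept_embs, dim=-1)  # (20,)
head = torch.nn.Linear(20, 3)          # interpretable linear classifier
logits = head(scores)
```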
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
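As one concrete example of a post-hoc feature importance method (permutation importance; my illustration, not necessarily the method used in the paper):

```python
import numpy as np

def permutation_importance(model, X, y, metric):
    """Shuffle one feature at a time and record how much the model's
    score drops; larger drops mean the feature matters more."""
    base = metric(y, model.predict(X))
    rng = np.random.default_rng(0)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])          # destroy this feature's information
        drops.append(base - metric(y, model.predict(Xp)))
    return np.array(drops)
```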
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We present, for the first time, feature visualization results for convolutional neural networks (CNNs) trained on neuroimaging data.
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
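Feature visualization here means gradient ascent on the input; a minimal sketch (unregularised, so real use typically adds priors such as jitter or blurring):

```python
import torch

def activation_maximization(model, unit, steps=200, lr=0.1):
    """Synthesise an input that maximally activates one output unit of a
    trained CNN by gradient ascent on the image itself."""
    x = torch.zeros(1, 1, 96, 96, requires_grad=True)  # e.g. one MRI slice
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, unit]   # minimise the negative activation
        loss.backward()
        opt.step()
    return x.detach()
```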
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors [6.188009802619095]
We use a well-trained and high performing neural network for classification of three skin tumours, i.e. Melanocytic Naevi, Melanoma and Seborrheic Keratosis.
Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs).
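A minimal sketch of how a CAV is obtained and used (my illustration of the standard TCAV recipe, not the paper's code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(acts_concept, acts_random):
    """Fit a linear classifier separating layer activations of concept
    examples from random examples; its (unit) normal is the CAV."""
    X = np.vstack([acts_concept, acts_random])
    y = np.r_[np.ones(len(acts_concept)), np.zeros(len(acts_random))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.coef_[0] / np.linalg.norm(clf.coef_[0])

def concept_sensitivity(grad_logit_wrt_acts, cav):
    """Directional derivative of a class logit along the CAV: its sign says
    whether the concept pushes the prediction toward that class."""
    return float(np.dot(grad_logit_wrt_acts, cav))
```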
arXiv Detail & Related papers (2020-05-05T08:27:16Z)