Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis
from Chest X-rays
- URL: http://arxiv.org/abs/2012.10787v2
- Date: Fri, 12 Feb 2021 12:30:32 GMT
- Title: Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis
from Chest X-rays
- Authors: Rishab Khincha, Soundarya Krishnan, Tirtharaj Dash, Lovekesh Vig and
Ashwin Srinivasan
- Abstract summary: We focus on constructing models to assist a clinician in the diagnosis of COVID-19 patients in situations where it is easier and cheaper to obtain X-ray data than to obtain high-quality images like those from CT scans.
Deep neural networks have repeatedly been shown to be capable of constructing highly predictive models for disease detection directly from image data.
- Score: 15.664919899567288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, our focus is on constructing models to assist a clinician in
the diagnosis of COVID-19 patients in situations where it is easier and cheaper
to obtain X-ray data than to obtain high-quality images like those from CT
scans. Deep neural networks have repeatedly been shown to be capable of
constructing highly predictive models for disease detection directly from image
data. However, their use in assisting clinicians has repeatedly hit a stumbling
block due to their black-box nature. Some of this difficulty can be alleviated
if predictions were accompanied by explanations expressed in clinically
relevant terms. In this paper, deep neural networks are used to extract
domain-specific features (morphological features like ground-glass opacity and
disease indications like pneumonia) directly from the image data. Predictions
about these features are then used to construct a symbolic model (a decision
tree) for the diagnosis of COVID-19 from chest X-rays, accompanied by two
kinds of explanations: visual (saliency maps, derived from the neural stage),
and textual (logical descriptions, derived from the symbolic stage). A
radiologist rates the usefulness of the visual and textual explanations. Our
results demonstrate that neural models can be employed usefully in identifying
domain-specific features from low-level image data; that textual explanations
in terms of clinically relevant features may be useful; and that visual
explanations will need to be clinically meaningful to be useful.
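The two-stage design described in the abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical reconstruction, not the authors' code: a stand-in neural network predicts clinically meaningful features from an X-ray, a scikit-learn decision tree maps those predictions to a COVID-19 diagnosis, and the tree's rules serve as the textual explanation. The feature list, network, and thresholds are all illustrative assumptions.
```python
# Hypothetical sketch of the two-stage pipeline from the abstract:
# stage 1 (neural) predicts domain-specific features from a chest X-ray;
# stage 2 (symbolic) is a decision tree over those predictions that yields
# the diagnosis plus a rule-like textual explanation. Names are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["ground_glass_opacity", "consolidation", "pneumonia"]  # assumed feature set

# Stage 1: a small stand-in CNN predicting the features from a 1x224x224 X-ray.
# The paper trains deep networks for this; this untrained module only shows the interface.
feature_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(FEATURES)),
)

def predict_features(xray: torch.Tensor) -> np.ndarray:
    """Per-feature probabilities for a batch of X-ray tensors (N, 1, 224, 224)."""
    with torch.no_grad():
        return torch.sigmoid(feature_net(xray)).numpy()

# Stage 2: a decision tree over the feature predictions.
# Toy data stands in for real feature predictions and COVID-19 labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, len(FEATURES)))
y_train = (X_train[:, 0] > 0.5).astype(int)  # toy rule: GGO drives the label
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Diagnosis for a new image, plus a textual explanation (the tree rendered as rules).
x_new = predict_features(torch.randn(1, 1, 224, 224))
print("diagnosis:", "COVID-19" if tree.predict(x_new)[0] == 1 else "non-COVID-19")
print(export_text(tree, feature_names=FEATURES))
```
In the paper, the visual explanations (saliency maps) come from the neural stage; this sketch covers only the textual, rule-based side.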
Related papers
- Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We show, for the first time, results using feature visualization of convolutional neural networks (CNNs).
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns and, at inference, identify anomalies (unseen or modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning [20.665935997959025]
We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
arXiv Detail & Related papers (2021-07-12T17:42:09Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose the Contrastive Attention (CA) model, which compares the current input image with normal images to distill contrastive information.
We achieve state-of-the-art results on two public datasets.
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
- Covid-19 Detection from Chest X-ray and Patient Metadata using Graph Convolutional Neural Networks [6.420262246029286]
We propose a novel Graph Convolutional Neural Network (GCN) that is capable of identifying bio-markers of Covid-19 pneumonia.
The proposed method exploits important relational knowledge between data instances and their features using graph representation and applies convolution to learn the graph data.
arXiv Detail & Related papers (2021-05-20T13:13:29Z)
- Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models [38.64779427647742]
Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest x-rays.
In this work, we first identify the semantics associated with internal units (feature maps) of the network.
We investigate the effect of pretraining and data imbalance on the interpretability of learned features.
arXiv Detail & Related papers (2021-04-04T17:35:13Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Potential Features of ICU Admission in X-ray Images of COVID-19 Patients [8.83608410540057]
This paper presents an original methodology for extracting semantic features that correlate to severity from a data set with patient ICU admission labels.
The methodology employs a neural network trained to recognise lung pathologies to extract the semantic features.
The method has been shown to be capable of selecting images for the learned features, which could convey some information about their common locations in the lung.
arXiv Detail & Related papers (2020-09-26T13:48:39Z)