Improving Disease Classification Performance and Explainability of Deep
Learning Models in Radiology with Heatmap Generators
- URL: http://arxiv.org/abs/2207.00157v1
- Date: Tue, 28 Jun 2022 13:03:50 GMT
- Authors: Akino Watanabe, Sara Ketabi, Khashayar (Ernest) Namdar, and Farzad
Khalvati
- Abstract summary: Three experiment sets were conducted with a U-Net architecture to improve the classification performance.
The greatest improvements were for the "pneumonia" and "CHF" classes, which the baseline model struggled most to classify.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep learning is widely used in the radiology field, the explainability of
such models is increasingly becoming essential to gain clinicians' trust when
using the models for diagnosis. In this research, three experiment sets were
conducted with a U-Net architecture to improve the classification performance
while enhancing the heatmaps corresponding to the model's focus through
incorporating heatmap generators during training. All of the experiments used
the dataset that contained chest radiographs, associated labels from one of the
three conditions ("normal", "congestive heart failure (CHF)", and "pneumonia"),
and numerical information regarding a radiologist's eye-gaze coordinates on the
images. The paper (Karargyris and Moradi, 2021) that introduced this dataset
developed a U-Net model, which was treated as the baseline model for this
research, to show how the eye-gaze data can be used in multi-modal training for
explainability improvement. To compare the classification performances, the 95%
confidence intervals (CI) of the area under the receiver operating
characteristic curve (AUC) were measured. The best method achieved an AUC of
0.913 (CI: 0.860-0.966). The greatest improvements were for the "pneumonia" and
"CHF" classes, which the baseline model struggled most to classify, resulting
in AUCs of 0.859 (CI: 0.732-0.957) and 0.962 (CI: 0.933-0.989), respectively.
The proposed method's decoder was also able to produce probability masks that
highlight the image regions that drove the model's classifications, similar to
the radiologist's eye-gaze data. Hence, this work showed that incorporating
heatmap generators and eye-gaze information into training can simultaneously
improve disease classification and provide explainable visuals that align well
with how the radiologist viewed the chest radiographs when making a diagnosis.
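The abstract reports 95% confidence intervals for the AUCs but does not state how they were computed. A common choice for this kind of evaluation is percentile bootstrap resampling of the test set; the sketch below is an illustration of that approach, not the authors' code, and all function names are assumptions.

```python
import numpy as np

def auc(y_true, y_score):
    """Binary ROC AUC via the Mann-Whitney formulation."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for the ROC AUC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)  # resample test cases with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue  # a resample needs both classes for AUC to be defined
        stats.append(auc(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For the three-class setting in the paper, the same procedure would be applied per class in a one-vs-rest fashion, yielding per-class intervals such as the reported 0.859 (CI: 0.732-0.957) for "pneumonia".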
Related papers
- Peritumoral Expansion Radiomics for Improved Lung Cancer Classification
This study investigated how nodule segmentation and surrounding peritumoral regions influence radiomics-based lung cancer classification.
Inclusion of peritumoral regions significantly enhanced performance, with the best result obtained at 8 mm expansion.
Our radiomics-based approach demonstrated superior classification accuracy.
arXiv Detail & Related papers (2024-11-24T23:04:45Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification
The aim was to investigate chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - Learning to diagnose common thorax diseases on chest radiographs from
radiology reports in Vietnamese
We propose a data collecting and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images.
This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories which may vary from country to country.
arXiv Detail & Related papers (2022-09-11T06:06:03Z) - Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - Vision Transformers for femur fracture classification
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classes, using one of the largest and richest datasets of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Deep Learning to Quantify Pulmonary Edema in Chest Radiographs
We developed a machine learning model to classify the severity grades of pulmonary edema on chest radiographs.
Deep learning models were trained on a large chest radiograph dataset.
arXiv Detail & Related papers (2020-08-13T15:45:44Z) - Exploration of Interpretability Techniques for Deep COVID-19
Classification using Chest X-ray Images
Five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble were used in this paper to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images.
The mean Micro-F1 score of the models for COVID-19 classifications ranges from 0.66 to 0.875, and is 0.89 for the Ensemble of the network models.
arXiv Detail & Related papers (2020-06-03T22:55:53Z) - Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale
Chest Computed Tomography Volumes
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.