Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification
- URL: http://arxiv.org/abs/2303.01871v1
- Date: Fri, 3 Mar 2023 12:05:41 GMT
- Title: Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification
- Authors: Alessandro Wollek, Robert Graf, Saša Čečatka, Nicola Fink,
Theresa Willem, Bastian O. Sabel, Tobias Lasser
- Abstract summary: To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to those of state-of-the-art CNNs.
- Score: 52.77024349608834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: To investigate chest radiograph (CXR) classification performance of
vision transformers (ViT) and interpretability of attention-based saliency
using the example of pneumothorax classification.
Materials and Methods: In this retrospective study, ViTs were fine-tuned for
lung disease classification using four public data sets: CheXpert, Chest X-Ray
14, MIMIC CXR, and VinBigData. Saliency maps were generated using transformer
multimodal explainability and gradient-weighted class activation mapping
(GradCAM). Classification performance was evaluated on the Chest X-Ray 14,
VinBigData, and SIIM-ACR data sets using the area under the receiver operating
characteristic curve analysis (AUC) and compared with convolutional neural
networks (CNNs). The explainability methods were evaluated with
positive/negative perturbation, sensitivity-n, effective heat ratio,
intra-architecture repeatability and interarchitecture reproducibility. In the
user study, three radiologists classified 160 CXRs with/without saliency maps
for pneumothorax and rated their usefulness.
Results: ViTs achieved CXR classification AUCs comparable to state-of-the-art
CNNs: 0.95 (95% CI: 0.943, 0.950) versus 0.83 (95% CI: 0.826, 0.842) on Chest
X-Ray 14, 0.84 (95% CI: 0.769, 0.912) versus 0.83 (95% CI: 0.760, 0.895) on
VinBigData, and 0.85 (95% CI: 0.847, 0.861) versus 0.87 (95% CI: 0.868, 0.882)
on SIIM-ACR. Both saliency map methods revealed a strong bias toward
pneumothorax tubes in the models. Radiologists found 47% of the attention-based
saliency maps and 39% of the GradCAM maps useful. The attention-based methods
outperformed GradCAM on all metrics.
Conclusion: ViTs performed similarly to CNNs in CXR classification, and their
attention-based saliency maps were more useful to radiologists and outperformed
GradCAM.
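To illustrate how attention-based saliency can be derived from a ViT's attention matrices, the sketch below implements attention rollout (Abnar & Zuidema, 2020), a simpler relative of the transformer multimodal explainability method the paper actually uses, not the authors' implementation. The tensor shapes and function name are illustrative assumptions.

```python
import numpy as np

def attention_rollout(attentions):
    """Attention rollout sketch: propagate attention through all layers.

    attentions: list of per-layer arrays of shape (heads, tokens, tokens),
    with the CLS token at index 0. Returns one saliency score per image
    patch, as seen from the CLS token.
    """
    num_tokens = attentions[0].shape[-1]
    result = np.eye(num_tokens)
    for attn in attentions:
        # Average over heads, add identity for the residual connection,
        # then renormalize rows so each remains a probability distribution.
        a = attn.mean(axis=0) + np.eye(num_tokens)
        a = a / a.sum(axis=-1, keepdims=True)
        result = a @ result
    # Saliency of the patch tokens from the CLS token's point of view,
    # excluding the CLS token itself.
    return result[0, 1:]
```

For a 2D map, the returned vector would be reshaped to the ViT's patch grid (e.g. 14x14 for a 224x224 input with 16x16 patches) and upsampled to image resolution.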
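The positive/negative perturbation evaluation mentioned in the Methods can be sketched as follows, assuming the metric's common form: pixels are occluded in order of saliency and the model score is re-measured at each step, so a faithful map causes a steep score drop under positive perturbation. The function name and zero-masking choice are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def perturbation_scores(image, saliency, model_fn, steps=4, positive=True):
    """Mask increasing fractions of pixels by saliency rank and record
    the model score after each masking step.

    positive=True removes the most salient pixels first (score should
    drop fast for a faithful map); positive=False removes the least
    salient first (score should stay high).
    """
    order = np.argsort(saliency.flatten())
    if positive:
        order = order[::-1]  # most salient pixels first
    n = order.size
    scores = []
    for step in range(1, steps + 1):
        k = n * step // steps
        masked = image.flatten().copy()
        masked[order[:k]] = 0.0  # occlude the selected pixels
        scores.append(model_fn(masked.reshape(image.shape)))
    return np.array(scores)
```

In practice `model_fn` would be the classifier's pneumothorax probability; here any scalar-valued function of the image works, and the area under the resulting curve summarizes faithfulness.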
Related papers
- Comparison of retinal regions-of-interest imaged by OCT for the
classification of intermediate AMD [3.0171643773711208]
A total of 15744 B-scans from 269 intermediate AMD patients and 115 normal subjects were used in this study.
For each subset, a convolutional neural network (based on VGG16 architecture and pre-trained on ImageNet) was trained and tested.
The performance of the models was evaluated using the area under the receiver operating characteristic (AUROC), accuracy, sensitivity, and specificity.
arXiv Detail & Related papers (2023-05-04T13:48:55Z) - Vision Transformer for Efficient Chest X-ray and Gastrointestinal Image
Classification [2.3293678240472517]
This study uses different CNNs and transformer-based methods with a wide range of data augmentation techniques.
We evaluated their performance on three medical image datasets from different modalities.
arXiv Detail & Related papers (2023-04-23T04:07:03Z) - Learning to diagnose common thorax diseases on chest radiographs from
radiology reports in Vietnamese [0.33598755777055367]
We propose a data collecting and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images.
This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories which may vary from country to country.
arXiv Detail & Related papers (2022-09-11T06:06:03Z) - Improving Disease Classification Performance and Explainability of Deep
Learning Models in Radiology with Heatmap Generators [0.0]
Three experiment sets were conducted with a U-Net architecture to improve the classification performance.
The greatest improvements were for the "pneumonia" and "CHF" classes, which the baseline model struggled most to classify.
arXiv Detail & Related papers (2022-06-28T13:03:50Z) - A Deep Learning Based Workflow for Detection of Lung Nodules With Chest
Radiograph [0.0]
We built a segmentation model to identify lung areas from CXRs, and sliced them into 16 patches.
These labeled patches were then used to train and fine-tune a deep neural network (DNN) model, classifying the patches as positive or negative.
arXiv Detail & Related papers (2021-12-19T16:19:46Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification with the largest and richest dataset available.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Chest x-ray automated triage: a semiologic approach designed for
clinical implementation, exploiting different types of labels through a
combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
arXiv Detail & Related papers (2020-12-23T14:38:35Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z) - Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning [57.00601760750389]
We present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
Such a tool can gauge severity of COVID-19 lung infections that can be used for escalation or de-escalation of care.
arXiv Detail & Related papers (2020-05-24T23:13:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.