Explaining COVID-19 and Thoracic Pathology Model Predictions by
Identifying Informative Input Features
- URL: http://arxiv.org/abs/2104.00411v1
- Date: Thu, 1 Apr 2021 11:42:39 GMT
- Title: Explaining COVID-19 and Thoracic Pathology Model Predictions by
Identifying Informative Input Features
- Authors: Ashkan Khakzar, Yang Zhang, Wejdene Mansour, Yuezhi Cai, Yawei Li,
Yucheng Zhang, Seong Tae Kim, Nassir Navab
- Abstract summary: Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Feature attribution methods identify the importance of input features for the output prediction.
We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics, and human-independent feature importance metrics on NIH Chest X-ray8 and BrixIA datasets.
- Score: 47.45835732009979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have demonstrated remarkable performance in classification
and regression tasks on chest X-rays. To establish trust in the clinical
routine, the networks' prediction mechanisms need to be interpretable.
One principal approach to interpretation is feature attribution. Feature
attribution methods identify the importance of input features for the output
prediction. Building on the Information Bottleneck Attribution (IBA) method, for
each prediction we identify the chest X-ray regions that have high mutual
information with the network's output. The original IBA identifies input regions
that have sufficient predictive information. We propose Inverse IBA to identify
all informative regions. Thus all predictive cues for pathologies are
highlighted on the X-rays, a desirable property for chest X-ray diagnosis.
Moreover, we propose Regression IBA for explaining regression models. Using
Regression IBA we observe that a model trained on cumulative severity score
labels implicitly learns the severity of different X-ray regions. Finally, we
propose Multi-layer IBA to generate higher resolution and more detailed
attribution/saliency maps. We evaluate our methods using both human-centric
(ground-truth-based) interpretability metrics, and human-independent feature
importance metrics on the NIH Chest X-ray8 and BrixIA datasets. The code is
publicly available.
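To make the mechanism concrete, here is a minimal sketch of the information-bottleneck attribution idea the abstract builds on: inject noise into one layer's feature map through a learned per-location mask, so that only regions with high mutual information with the target prediction are kept. This is a hedged illustration, not the authors' released code; the toy CNN, the choice of bottleneck layer, and the hyperparameters (beta, steps, lr) are assumptions for demonstration.

```python
# Minimal sketch of IBA-style attribution, assuming a PyTorch classifier.
# The toy CNN, bottleneck layer, and hyperparameters are illustrative
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def iba_attribution(model, layer, x, target, beta=10.0, steps=30, lr=0.5):
    """Learn a mask `lam` over `layer`'s features: regions kept (lam -> 1)
    carry information needed to predict `target`; the rest is replaced by
    noise drawn from a Gaussian fitted to the feature statistics."""
    stash = {}
    hook = layer.register_forward_hook(lambda m, i, o: stash.update(r=o))
    with torch.no_grad():
        model(x)                                   # record clean features R
    hook.remove()
    r = stash["r"].detach()
    mu, std = r.mean(), r.std() + 1e-6             # Gaussian prior Q(Z)

    alpha = torch.zeros_like(r, requires_grad=True)  # mask logits
    opt = torch.optim.Adam([alpha], lr=lr)

    def kl_to_prior(lam):
        # Elementwise KL( N(lam*r+(1-lam)*mu, ((1-lam)*std)^2) || N(mu, std^2) ),
        # an upper bound on the information I(R; Z) passing the bottleneck.
        return (-torch.log(1.0 - lam + 1e-6)
                + 0.5 * ((1.0 - lam) ** 2 + (lam * (r - mu) / std) ** 2)
                - 0.5)

    def bottleneck(m, i, o):                       # Z = lam*R + (1-lam)*noise
        lam = torch.sigmoid(alpha)
        eps = mu + std * torch.randn_like(o)
        return lam * o + (1.0 - lam) * eps

    hook = layer.register_forward_hook(bottleneck)
    for _ in range(steps):                         # fit the mask
        opt.zero_grad()
        logits = model(x)
        lam = torch.sigmoid(alpha)
        loss = F.cross_entropy(logits, target) + beta * kl_to_prior(lam).mean()
        loss.backward()
        opt.step()
    hook.remove()

    with torch.no_grad():                          # per-location information map
        return kl_to_prior(torch.sigmoid(alpha)).sum(dim=1)  # (N, H, W)

# Toy demo with random data standing in for a chest X-ray classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
).eval()
x = torch.randn(1, 1, 64, 64)                      # fake single-channel image
saliency = iba_attribution(model, model[2], x, torch.tensor([1]))
print(saliency.shape)                              # torch.Size([1, 64, 64])
```

Per the abstract, the paper's variants change the objective rather than the mechanism: Regression IBA would swap the cross-entropy term for a regression loss, and Inverse IBA inverts the optimization to find all informative regions rather than only sufficient ones. See the authors' public code for the actual implementations.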
Related papers
- CXPMRG-Bench: Pre-training and Benchmarking for X-ray Medical Report Generation on CheXpert Plus Dataset [14.911363203907008]
X-ray image-based medical report generation can significantly reduce diagnostic burdens and patient wait times.
We conduct a comprehensive benchmarking of existing mainstream X-ray report generation models and large language models (LLMs) on the CheXpert Plus dataset.
We propose a large model for X-ray image report generation using a multi-stage pre-training strategy, including self-supervised autoregressive generation and X-ray-report contrastive learning.
arXiv Detail & Related papers (2024-10-01T04:07:01Z)
- Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays [46.78926066405227]
Anomaly detection in chest X-rays is a critical task.
Recently, CLIP-based methods, pre-trained on a large number of medical images, have shown impressive performance on zero/few-shot downstream tasks.
We propose a position-guided prompt learning method to adapt the task data to the frozen CLIP-based model.
arXiv Detail & Related papers (2024-05-20T12:11:41Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest X-ray (CXR) identification performance through lung segmentation.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a self-supervised momentum contrast (MoCo) backbone pre-trained on large-scale CXR datasets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- The pitfalls of using open data to develop deep learning solutions for COVID-19 detection in chest X-rays [64.02097860085202]
Deep learning models have been developed to identify COVID-19 from chest X-rays.
Results have been exceptional when training and testing on open-source data.
Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem.
arXiv Detail & Related papers (2021-09-14T10:59:11Z)
- Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models [38.64779427647742]
Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest x-rays.
In this work, we first identify the semantics associated with internal units (feature maps) of the network.
We investigate the effect of pretraining and data imbalance on the interpretability of learned features.
arXiv Detail & Related papers (2021-04-04T17:35:13Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Interpreting Uncertainty in Model Predictions For COVID-19 Diagnosis [0.0]
COVID-19 has created a need for assistive tools that enable faster diagnosis in addition to typical lab swab testing.
Traditional convolutional networks use point estimates for predictions and fail to capture uncertainty.
We develop a visualization framework to address interpretability of uncertainty and its components, with uncertainty in predictions computed with a Bayesian Convolutional Neural Network.
arXiv Detail & Related papers (2020-10-26T01:27:29Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from the same source as its training data starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.