XProtoNet: Diagnosis in Chest Radiography with Global and Local
Explanations
- URL: http://arxiv.org/abs/2103.10663v1
- Date: Fri, 19 Mar 2021 07:18:21 GMT
- Title: XProtoNet: Diagnosis in Chest Radiography with Global and Local
Explanations
- Authors: Eunji Kim, Siwon Kim, Minji Seo, Sungroh Yoon
- Abstract summary: We present XProtoNet, a globally and locally interpretable diagnosis framework for chest radiography.
XProtoNet learns representative patterns of each disease, called prototypes, from X-ray images and makes a diagnosis on a given X-ray image based on those patterns.
It can provide both a global explanation (the prototype) and a local explanation (how the prototype contributes to the prediction for a single image).
- Score: 19.71623263373982
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated diagnosis using deep neural networks in chest radiography can help
radiologists detect life-threatening diseases. However, existing methods only
provide predictions without accurate explanations, undermining the
trustworthiness of the diagnostic methods. Here, we present XProtoNet, a
globally and locally interpretable diagnosis framework for chest radiography.
XProtoNet learns representative patterns of each disease, called prototypes,
from X-ray images and makes a diagnosis on a given X-ray image based on those
patterns. It predicts the area where a sign of the disease is likely to appear
and compares the features in the predicted area with the prototypes. It can
provide both a global explanation (the prototype) and a local explanation (how
the prototype contributes to the prediction for a single image). Despite the
constraint imposed by interpretability, XProtoNet achieves state-of-the-art
classification performance on the public NIH chest X-ray dataset.
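As a rough sketch of the mechanism the abstract describes (predict an occurrence area, pool features inside it, and compare them with learned disease prototypes), the following PyTorch-style module may help. Module names, shapes, and the sigmoid/cosine choices are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XProtoNetSketch(nn.Module):
    """Prototype-based diagnosis sketch: features pooled from a predicted
    occurrence area are compared with learnable prototype vectors."""
    def __init__(self, backbone, feat_dim=512, num_diseases=14, protos_per_disease=3):
        super().__init__()
        self.backbone = backbone                      # CNN -> (B, feat_dim, H, W)
        num_protos = num_diseases * protos_per_disease
        # one predicted occurrence map per prototype
        self.occurrence = nn.Conv2d(feat_dim, num_protos, kernel_size=1)
        # learnable prototypes: the global explanation
        self.prototypes = nn.Parameter(torch.randn(num_protos, feat_dim))
        # aggregates prototype similarities into per-disease scores
        self.classifier = nn.Linear(num_protos, num_diseases)

    def forward(self, x):
        f = self.backbone(x)                          # (B, C, H, W)
        occ = torch.sigmoid(self.occurrence(f))       # (B, P, H, W): local explanation
        area = occ.sum(dim=(2, 3)).clamp(min=1e-6)    # (B, P)
        # average the features inside each predicted occurrence area
        v = torch.einsum('bphw,bchw->bpc', occ, f) / area.unsqueeze(-1)
        # similarity between pooled features and each prototype
        sim = F.cosine_similarity(v, self.prototypes.unsqueeze(0), dim=-1)
        return self.classifier(sim)                   # per-disease logits
```

Read this way, the prototypes serve as the global explanation shared across images, while the per-image occurrence maps serve as the local explanation of where each prototype was matched.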
Related papers
- Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays [46.78926066405227]
Anomaly detection in chest X-rays is a critical task.
Recently, CLIP-based methods, pre-trained on a large number of medical images, have shown impressive performance on zero/few-shot downstream tasks.
We propose a position-guided prompt learning method to adapt the task data to the frozen CLIP-based model.
arXiv Detail & Related papers (2024-05-20T12:11:41Z)
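The entry above adapts a frozen CLIP-style model with prompt learning. Below is a generic prompt-tuning sketch (learnable context tokens prepended to class-name embeddings while the text encoder stays frozen); it does not implement the paper's position-guided component, and all names and shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """Generic prompt tuning: only the context tokens receive gradients;
    the CLIP-style text encoder is frozen."""
    def __init__(self, text_encoder, class_token_embs, n_ctx=8, dim=512):
        super().__init__()
        self.text_encoder = text_encoder
        for p in self.text_encoder.parameters():      # freeze the backbone
            p.requires_grad_(False)
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # precomputed class-name token embeddings: (num_classes, n_tok, dim)
        self.register_buffer('cls_embs', class_token_embs)

    def forward(self):
        n_cls = self.cls_embs.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        prompts = torch.cat([ctx, self.cls_embs], dim=1)  # [CTX..., CLASS...]
        return self.text_encoder(prompts)             # (num_classes, dim)

def anomaly_logits(image_feats, class_feats, tau=0.07):
    # cosine similarity between image and prompt embeddings
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(class_feats, dim=-1)
    return img @ txt.t() / tau                        # (B, num_classes)
```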
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework covering six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking the reporting process down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis [36.45569352490318]
We introduce Xplainer, a framework for explainable zero-shot diagnosis in the clinical setting.
Xplainer adapts the classification-by-description approach of contrastive vision-language models to the multi-label medical diagnosis task.
Our results suggest that Xplainer provides a more detailed understanding of the decision-making process.
arXiv Detail & Related papers (2023-03-23T16:07:31Z)
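A minimal sketch of the classification-by-description idea behind Xplainer: score the image against several textual descriptors per pathology and aggregate the descriptor probabilities. The mean aggregation and sigmoid calibration here are assumptions, not Xplainer's exact formulation.

```python
import torch
import torch.nn.functional as F

def classification_by_description(img_emb, desc_embs_per_class, tau=1.0):
    """img_emb: (D,) image embedding from a contrastive VLM.
    desc_embs_per_class: list of (N_i, D) descriptor embeddings per pathology."""
    img = F.normalize(img_emb, dim=-1)
    scores = []
    for desc_embs in desc_embs_per_class:
        sims = F.normalize(desc_embs, dim=-1) @ img   # (N_i,) cosine similarities
        probs = torch.sigmoid(sims / tau)             # descriptor-level probabilities
        scores.append(probs.mean())                   # aggregation rule: assumed mean
    return torch.stack(scores)                        # (num_pathologies,)
```

Because every pathology score decomposes into descriptor-level probabilities, the intermediate values double as an explanation of the zero-shot prediction.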
- Vision-Language Generative Model for View-Specific Chest X-ray Generation [18.347723213970696]
ViewXGen is designed to overcome the limitation of existing methods, which generate only frontal-view chest X-rays.
Our approach takes into consideration the diverse view positions found in the dataset, enabling the generation of chest X-rays with specific views.
arXiv Detail & Related papers (2023-02-23T17:13:25Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns, and at inference it can identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, normal regions dominate the entire chest X-ray image, and descriptions of these normal regions dominate the final report.
We propose the Contrastive Attention (CA) model, which compares the current input image with normal images to distill contrastive information.
We achieve state-of-the-art results on two public datasets.
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
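A rough sketch of the contrastive information described above: attend from the input image's region features to a pool of normal-image features and keep the residual. The subtraction-based formulation is an assumption for illustration, not the CA model's exact design.

```python
import torch
import torch.nn.functional as F

def contrastive_features(query_feats, normal_pool, tau=1.0):
    """query_feats: (R, D) region features of the input image.
    normal_pool: (M, D) features collected from normal chest X-rays."""
    q = F.normalize(query_feats, dim=-1)
    n = F.normalize(normal_pool, dim=-1)
    attn = torch.softmax(q @ n.t() / tau, dim=-1)     # (R, M) attention over normals
    normal_ctx = attn @ normal_pool                   # best-matching normal context
    return query_feats - normal_ctx                   # residual, abnormality-oriented signal
```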
- Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features [47.45835732009979]
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Feature attribution methods identify the importance of input features for the output prediction.
We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics, and human-independent feature importance metrics on NIH Chest X-ray8 and BrixIA datasets.
arXiv Detail & Related papers (2021-04-01T11:42:39Z)
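As a generic illustration of feature attribution for the entry above, here is a simple occlusion-based sketch: mask one patch at a time and record the drop in the target-class score. This is a standard baseline, not the paper's specific attribution method.

```python
import torch

def occlusion_attribution(model, image, target_class, patch=16, baseline=0.0):
    """image: (C, H, W) tensor; model: maps (1, C, H, W) -> (1, num_classes)."""
    model.eval()
    _, H, W = image.shape
    heat = torch.zeros((H + patch - 1) // patch, (W + patch - 1) // patch)
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class].item()
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                masked = image.clone()
                masked[:, i:i + patch, j:j + patch] = baseline  # occlude one patch
                score = model(masked.unsqueeze(0))[0, target_class].item()
                heat[i // patch, j // patch] = base - score     # importance = score drop
    return heat
```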
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
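A minimal sketch of the adversarial strategy mentioned in the last entry, in the style of gradient-reversal domain-adversarial training: a domain discriminator tries to identify the source dataset, and the reversed gradient pushes the encoder toward source-invariant features. The exact architecture is an assumption and will differ from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

class DomainAdversarialNet(nn.Module):
    """Task head learns the diagnosis; domain head sees reversed gradients,
    so the encoder is trained to hide which dataset an image came from."""
    def __init__(self, encoder, feat_dim, num_classes, num_sources, lamb=1.0):
        super().__init__()
        self.encoder = encoder
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.domain_head = nn.Linear(feat_dim, num_sources)
        self.lamb = lamb

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        domain_logits = self.domain_head(GradReverse.apply(z, self.lamb))
        return task_logits, domain_logits  # train both with cross-entropy
```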
This list is automatically generated from the titles and abstracts of the papers on this site.