Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis
- URL: http://arxiv.org/abs/2303.13391v3
- Date: Wed, 28 Jun 2023 10:26:46 GMT
- Title: Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis
- Authors: Chantal Pellegrini, Matthias Keicher, Ege Özsoy, Petra Jiraskova,
Rickmer Braren, Nassir Navab
- Abstract summary: We introduce Xplainer, a framework for explainable zero-shot diagnosis in the clinical setting.
Xplainer adapts the classification-by-description approach of contrastive vision-language models to the multi-label medical diagnosis task.
Our results suggest that Xplainer provides a more detailed understanding of the decision-making process.
- Score: 36.45569352490318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated diagnosis prediction from medical images is a valuable resource to
support clinical decision-making. However, such systems usually need to be
trained on large amounts of annotated data, which often is scarce in the
medical domain. Zero-shot methods address this challenge by allowing flexible
adaptation to new settings with different clinical findings without relying on
labeled data. Further, to integrate automated diagnosis in the clinical
workflow, methods should be transparent and explainable, increasing medical
professionals' trust and facilitating correctness verification. In this work,
we introduce Xplainer, a novel framework for explainable zero-shot diagnosis in
the clinical setting. Xplainer adapts the classification-by-description
approach of contrastive vision-language models to the multi-label medical
diagnosis task. Specifically, instead of directly predicting a diagnosis, we
prompt the model to classify the existence of descriptive observations, which a
radiologist would look for on an X-Ray scan, and use the descriptor
probabilities to estimate the likelihood of a diagnosis. Our model is
explainable by design, as the final diagnosis prediction is directly based on
the prediction of the underlying descriptors. We evaluate Xplainer on two chest
X-ray datasets, CheXpert and ChestX-ray14, and demonstrate its effectiveness in
improving the performance and explainability of zero-shot diagnosis. Our
results suggest that Xplainer provides a more detailed understanding of the
decision-making process and can be a valuable tool for clinical diagnosis.
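To make the classification-by-description idea concrete, below is a minimal sketch of descriptor-based zero-shot diagnosis with a generic CLIP-style contrastive vision-language model. The backbone checkpoint, prompt templates, descriptor list, and mean aggregation are illustrative assumptions, not the exact Xplainer configuration (the paper uses a medical-domain contrastive model and its own prompts and aggregation).

```python
# Sketch of classification-by-description for zero-shot diagnosis.
# Assumptions: general-domain CLIP backbone, hypothetical descriptor list,
# simple positive/negative prompts, and arithmetic-mean aggregation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical observation list for one finding; the paper derives such
# radiological descriptors for each pathology.
DESCRIPTORS = {
    "Pneumonia": ["airspace opacity", "consolidation", "air bronchograms"],
}

def descriptor_probability(image: Image.Image, descriptor: str) -> float:
    """P(descriptor present), from contrasting a positive and a negative
    text prompt scored against the image."""
    prompts = [
        f"A chest X-ray showing {descriptor}.",     # positive prompt
        f"A chest X-ray showing no {descriptor}.",  # negative prompt
    ]
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, 2)
    probs = logits.softmax(dim=-1)[0]
    return probs[0].item()  # probability mass on the positive prompt

def diagnosis_probability(image: Image.Image, diagnosis: str) -> float:
    """Aggregate descriptor probabilities into a diagnosis score
    (arithmetic mean here; the paper's aggregation may differ)."""
    p = [descriptor_probability(image, d) for d in DESCRIPTORS[diagnosis]]
    return sum(p) / len(p)

image = Image.open("example_cxr.png")  # placeholder path
score = diagnosis_probability(image, "Pneumonia")
print(f"Pneumonia: {score:.2f} (each descriptor probability is inspectable)")
```

Because the diagnosis score is a function of named descriptor probabilities, each intermediate value can be shown to a radiologist, which is what makes the prediction explainable by design.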
Related papers
- MAGDA: Multi-agent guideline-driven diagnostic assistance [43.15066219293877]
In emergency departments, rural hospitals, or clinics in less developed regions, clinicians often lack fast access to image analysis by trained radiologists.
In this work, we introduce a new approach for zero-shot guideline-driven decision support.
We model a system of multiple LLM agents augmented with a contrastive vision-language model that collaborate to reach a patient diagnosis.
arXiv Detail & Related papers (2024-09-10T09:10:30Z)
- Towards Reducing Diagnostic Errors with Interpretable Risk Prediction [18.474645862061426]
We propose a method to use LLMs to identify pieces of evidence in patient EHR data that indicate increased or decreased risk of specific diagnoses.
Our ultimate aim is to increase access to evidence and reduce diagnostic errors.
arXiv Detail & Related papers (2024-02-15T17:05:48Z)
- Improving Chest X-Ray Classification by RNN-based Patient Monitoring [0.34998703934432673]
We analyze how information about diagnosis can improve CNN-based image classification models.
We show that a model trained on additional patient history information outperforms a model trained without the information by a significant margin.
arXiv Detail & Related papers (2022-10-28T11:47:15Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice such models must not only be accurate, but provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations [19.71623263373982]
We present XProtoNet, a globally and locally interpretable diagnosis framework for chest radiography.
XProtoNet learns representative patterns of each disease from X-ray images, which are prototypes, and makes a diagnosis on a given X-ray image.
It provides both a global explanation (the prototype) and a local explanation (how the prototype contributes to the prediction for a single image).
arXiv Detail & Related papers (2021-03-19T07:18:21Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
- Query-Focused EHR Summarization to Aid Imaging Diagnosis [22.21438906817433]
We propose and evaluate models that extract relevant text snippets from patient records to provide a rough case summary.
We use groups of International Classification of Diseases (ICD) codes observed in 'future' records as noisy proxies for 'downstream' diagnoses.
We train (via distant supervision) and evaluate variants of this model on EHR data from Brigham and Women's Hospital in Boston and MIMIC-III.
arXiv Detail & Related papers (2020-04-09T16:32:39Z)