Interpretable Vertebral Fracture Diagnosis
- URL: http://arxiv.org/abs/2203.16273v1
- Date: Wed, 30 Mar 2022 13:07:41 GMT
- Title: Interpretable Vertebral Fracture Diagnosis
- Authors: Paul Engstler, Matthias Keicher, David Schinz, Kristina Mach,
Alexandra S. Gersing, Sarah C. Foreman, Sophia S. Goller, Juergen Weissinger,
Jon Rischewski, Anna-Sophia Dietrich, Benedikt Wiestler, Jan S. Kirschke,
Ashkan Khakzar, Nassir Navab
- Abstract summary: Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
- Score: 69.68641439851777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Do black-box neural network models learn clinically relevant features for
fracture diagnosis? The answer not only establishes reliability and quenches
scientific curiosity but also leads to explainable and verbose findings that
can assist the radiologists in the final diagnosis and increase trust. This work
identifies the concepts networks use for vertebral fracture diagnosis in CT
images. This is achieved by associating concepts to neurons highly correlated
with a specific diagnosis in the dataset. The concepts are either associated
with neurons by radiologists pre-hoc or are visualized during a specific
prediction and left for the user's interpretation. We evaluate which concepts
lead to correct diagnosis and which concepts lead to false positives. The
proposed frameworks and analysis pave the way for reliable and explainable
vertebral fracture diagnosis.
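As a concrete illustration of the neuron-concept association step, the minimal sketch below ranks feature channels of a trained classifier by how strongly their mean activation correlates with the fracture label across a dataset; the `model.features` hook, the data loader, and the Pearson-correlation ranking are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: rank neurons (channels) of a trained CNN by correlation
# with a diagnosis label, a first step toward concept association.
# `model.features` is an assumed feature-map hook, not the paper's API.
import numpy as np
import torch

def neuron_label_correlation(model, loader, device="cpu"):
    """Return channels ranked by |Pearson correlation| between their
    mean activation and the binary fracture label."""
    acts, labels = [], []
    model.eval()
    with torch.no_grad():
        for images, y in loader:
            feats = model.features(images.to(device))       # (B, C, H, W)
            acts.append(feats.mean(dim=(2, 3)).cpu().numpy())  # per-channel mean
            labels.append(y.numpy())
    A = np.concatenate(acts)       # (N, C) activations
    y = np.concatenate(labels)     # (N,) labels
    A_c, y_c = A - A.mean(0), y - y.mean()
    corr = (A_c * y_c[:, None]).sum(0) / (
        np.linalg.norm(A_c, axis=0) * np.linalg.norm(y_c) + 1e-8)
    return np.argsort(-np.abs(corr)), corr  # ranked channel indices, correlations
```

Radiologists could then inspect the activation maps of the top-ranked channels and attach concept names to them pre-hoc, in the spirit of the paper.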
Related papers
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose eye-tracking as an alternative to text reports for medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks.
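The core of McGIP can be pictured as using gaze similarity to decide which image pairs count as positives in a contrastive loss. The sketch below is a hedged illustration under that reading; the cosine similarity, threshold, and loss form are illustrative choices, not necessarily the paper's.

```python
# Hedged sketch: images whose radiologist gaze heatmaps are similar are
# treated as positive pairs in a contrastive loss. Threshold and
# similarity measure are illustrative, not McGIP's exact design.
import torch
import torch.nn.functional as F

def gaze_positive_mask(gaze_maps, threshold=0.8):
    """gaze_maps: (B, H*W) flattened, L2-normalised gaze heatmaps."""
    sim = gaze_maps @ gaze_maps.T          # cosine similarity between heatmaps
    return (sim >= threshold).float()      # 1 = treat this pair as positive

def contrastive_loss(embeddings, pos_mask, temperature=0.1):
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.T / temperature
    logits.fill_diagonal_(-1e9)            # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = pos_mask.clone()
    pos_mask.fill_diagonal_(0)
    denom = pos_mask.sum(1).clamp(min=1)   # avoid division by zero
    return -(pos_mask * log_prob).sum(1).div(denom).mean()
```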
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
- Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis [36.45569352490318]
We introduce Xplainer, a framework for explainable zero-shot diagnosis in the clinical setting.
Xplainer adapts the classification-by-description approach of contrastive vision-language models to the multi-label medical diagnosis task.
Our results suggest that Xplainer provides a more detailed understanding of the decision-making process.
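Classification-by-description with a contrastive vision-language model can be sketched as scoring an image against textual observations per pathology. The descriptor lists and the `encode_image`/`encode_text` interface below are assumptions in the spirit of CLIP-style models, not Xplainer's actual prompts.

```python
# Sketch of classification-by-description: each pathology is scored by
# how well the image embedding matches a set of textual observations.
# Descriptors and model interface are illustrative assumptions.
import torch
import torch.nn.functional as F

DESCRIPTORS = {
    "pneumonia": ["patchy airspace opacity", "lobar consolidation"],
    "cardiomegaly": ["enlarged cardiac silhouette"],
}

def diagnose(model, tokenizer, image, device="cpu"):
    img_emb = F.normalize(model.encode_image(image.to(device)), dim=-1)
    scores = {}
    for disease, descs in DESCRIPTORS.items():
        txt_emb = F.normalize(model.encode_text(tokenizer(descs).to(device)), dim=-1)
        # mean descriptor similarity = evidence for this label (multi-label)
        scores[disease] = (img_emb @ txt_emb.T).mean().item()
    return scores
```

Because each label's score decomposes into per-descriptor similarities, the prediction can be explained by which observations matched.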
arXiv Detail & Related papers (2023-03-23T16:07:31Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
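A minimal sketch of the prototypical idea follows, assuming label-wise attention pools token embeddings into one vector per label, which is then scored by its distance to a learned prototype; the dimensions and squared-Euclidean distance are illustrative, not ProtoPatient's exact architecture.

```python
# Sketch: label-wise attention pools token embeddings per label; the
# pooled vector is scored by closeness to that label's prototype, so a
# prediction can be explained by "this patient looks like that prototype".
import torch
import torch.nn as nn

class ProtoLabelHead(nn.Module):
    def __init__(self, dim, n_labels):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_labels, dim))     # label-wise attention
        self.prototypes = nn.Parameter(torch.randn(n_labels, dim))  # one prototype per label

    def forward(self, tokens):                                  # tokens: (B, T, dim)
        attn = torch.softmax(tokens @ self.queries.T, dim=1)    # (B, T, L)
        pooled = attn.transpose(1, 2) @ tokens                  # (B, L, dim)
        dist = ((pooled - self.prototypes) ** 2).sum(-1)        # (B, L)
        return -dist            # higher score = closer to the label's prototype
```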
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- Automatic Infectious Disease Classification Analysis with Concept Discovery [9.606677000204831]
We argue that automatic discovery of concepts, i.e., human interpretable attributes, allows for a deep understanding of learned information in medical image analysis tasks.
We provide an overview of existing concept discovery approaches in medical image and computer vision communities.
We propose NMFx, a general NMF formulation of interpretability by concept discovery that works in a unified way in unsupervised, weakly supervised, and supervised scenarios.
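The general NMF-for-interpretability recipe that such work builds on can be sketched as factorising non-negative activations into a small number of concept directions; the snippet below is that generic recipe, not the full NMFx formulation.

```python
# Sketch of concept discovery by non-negative matrix factorisation:
# post-ReLU activations are factorised into k non-negative "concept"
# directions. A generic recipe, not the paper's unified NMFx objective.
import numpy as np
from sklearn.decomposition import NMF

def discover_concepts(activations, k=8):
    """activations: (N*H*W, C) non-negative spatial features."""
    nmf = NMF(n_components=k, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(activations)  # (N*H*W, k) concept presence per location
    H = nmf.components_                 # (k, C) concept directions in feature space
    return W, H
```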
arXiv Detail & Related papers (2022-08-28T05:33:44Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
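The multitask idea can be sketched as a shared encoder feeding one head for the benign/malignant decision and auxiliary heads for BI-RADS morphological descriptors; the class counts below follow the BI-RADS ultrasound lexicon, but the architecture itself is illustrative, not the paper's exact design.

```python
# Hedged sketch: auxiliary descriptor heads make the malignancy
# prediction explainable in clinician-readable BI-RADS terms.
import torch.nn as nn

class MultitaskBIRADS(nn.Module):
    def __init__(self, encoder, dim):
        super().__init__()
        self.encoder = encoder                   # any image backbone
        self.malignancy = nn.Linear(dim, 2)      # benign vs malignant
        self.shape = nn.Linear(dim, 3)           # oval / round / irregular
        self.margin = nn.Linear(dim, 5)          # circumscribed ... spiculated
        self.orientation = nn.Linear(dim, 2)     # parallel / not parallel

    def forward(self, x):
        z = self.encoder(x)
        return {"malignancy": self.malignancy(z), "shape": self.shape(z),
                "margin": self.margin(z), "orientation": self.orientation(z)}
```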
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Using Causal Analysis for Conceptual Deep Learning Explanation [11.552000005640203]
An ideal explanation resembles the decision-making process of a domain expert.
We take advantage of radiology reports accompanying chest X-ray images to define concepts.
We construct a low-depth decision tree to translate all the discovered concepts into a straightforward decision rule.
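That last step is straightforward to picture with scikit-learn: fit a depth-limited tree on per-image concept scores and print the resulting rule. Concept extraction is assumed to have happened upstream, and the concept names are placeholders.

```python
# Sketch: a shallow decision tree turns per-image concept scores into
# an inspectable decision rule. Low depth keeps the rule readable.
from sklearn.tree import DecisionTreeClassifier, export_text

def fit_concept_rule(concept_scores, labels, concept_names):
    """concept_scores: (N, n_concepts) presence scores; labels: (N,)."""
    tree = DecisionTreeClassifier(max_depth=3)
    tree.fit(concept_scores, labels)
    return export_text(tree, feature_names=list(concept_names))
```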
arXiv Detail & Related papers (2021-07-10T00:01:45Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
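The cross reconstruction idea can be illustrated as follows: the code extracted from one view must reconstruct the other view, which pushes shared information into the representation. Encoders and decoders are placeholders here, and the adversarial and label-guidance terms are omitted.

```python
# Illustrative cross-reconstruction loss: each view's code is asked to
# reconstruct the *other* view, so the shared code carries common
# information. Modules are placeholders for the paper's networks.
import torch.nn.functional as F

def cross_reconstruction_loss(x1, x2, enc1, enc2, dec1, dec2):
    z1, z2 = enc1(x1), enc2(x2)
    return (F.mse_loss(dec2(z1), x2) +   # view-1 code must explain view 2
            F.mse_loss(dec1(z2), x1))    # view-2 code must explain view 1
```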
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
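A toy sketch of the two signals being combined, under the assumption of ICD-style parent links and a code co-occurrence matrix; the update rule and weights below are illustrative only, not the paper's formulation.

```python
# Toy sketch: per-code scores are (a) boosted by their parent code in
# the hierarchy and (b) propagated over a code co-occurrence graph.
import numpy as np

def refine_scores(scores, parent, cooc, alpha=0.3, beta=0.3):
    """scores: (L,) raw code scores; parent: (L,) parent index per code
    (or -1 for roots); cooc: (L, L) row-normalised co-occurrence matrix."""
    hier = scores.copy()
    for c, p in enumerate(parent):
        if p >= 0:
            hier[c] += alpha * scores[p]          # inherit evidence from parent
    return (1 - beta) * hier + beta * cooc @ scores  # graph propagation step
```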
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
- Uncertainty aware and explainable diagnosis of retinal disease [0.0]
We perform uncertainty analysis of a deep learning model for diagnosis of four retinal diseases.
We show the features that the system used to make a prediction, while uncertainty awareness is the ability of a system to highlight when it is not sure about its decision.
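The abstract does not name the uncertainty method, but a common recipe for this kind of analysis is Monte Carlo dropout, sketched below: keep dropout active at test time and read uncertainty off the spread of repeated predictions.

```python
# Common uncertainty recipe (a sketch, not necessarily the paper's
# method): repeated stochastic forward passes with dropout enabled.
import torch

def mc_dropout_predict(model, image, n_samples=20):
    model.train()                       # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # prediction and per-class uncertainty
```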
arXiv Detail & Related papers (2021-01-26T23:37:30Z)
- Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis from Chest X-rays [15.664919899567288]
We focus on constructing models to assist a clinician in the diagnosis of COVID-19 patients in situations where it is easier and cheaper to obtain X-ray data than to obtain high-quality images like those from CT scans.
Deep neural networks have repeatedly been shown to be capable of constructing highly predictive models for disease detection directly from image data.
arXiv Detail & Related papers (2020-12-19T21:33:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.