Automatic Infectious Disease Classification Analysis with Concept
Discovery
- URL: http://arxiv.org/abs/2209.02415v1
- Date: Sun, 28 Aug 2022 05:33:44 GMT
- Authors: Elena Sizikova, Joshua Vendrow, Xu Cao, Rachel Grotheer, Jamie
Haddock, Lara Kassab, Alona Kryshchenko, Thomas Merkh, R. W. M. A. Madushani,
Kenny Moise, Annie Ulichney, Huy V. Vo, Chuntian Wang, Megan Coffee, Kathryn
Leonard, Deanna Needell
- Abstract summary: We argue that automatic discovery of concepts, i.e., human interpretable attributes, allows for a deep understanding of learned information in medical image analysis tasks.
We provide an overview of existing concept discovery approaches in medical image and computer vision communities.
We propose NMFx, a general NMF formulation of interpretability by concept discovery that works in a unified way in unsupervised, weakly supervised, and supervised scenarios.
- Score: 9.606677000204831
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic infectious disease classification from images can facilitate needed
medical diagnoses. Such an approach can identify diseases, like tuberculosis,
which remain under-diagnosed due to resource constraints, as well as novel and
emerging diseases, like monkeypox, which clinicians have little experience or
acumen in diagnosing. Avoiding missed or delayed diagnoses would prevent
further transmission and improve clinical outcomes. In order to understand and
trust neural network predictions, analysis of learned representations is
necessary. In this work, we argue that automatic discovery of concepts, i.e.,
human interpretable attributes, allows for a deep understanding of learned
information in medical image analysis tasks, generalizing beyond the training
labels or protocols. We provide an overview of existing concept discovery
approaches in medical image and computer vision communities, and evaluate
representative methods on tuberculosis (TB) prediction and monkeypox prediction
tasks. Finally, we propose NMFx, a general NMF formulation of interpretability
by concept discovery that works in a unified way in unsupervised, weakly
supervised, and supervised scenarios.
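To make the proposed NMFx concrete: concept discovery via NMF typically takes a nonnegative matrix X of deep image features (one row per image) and factors it as X ≈ WH with W, H ≥ 0 by minimizing ||X − WH||_F^2; each row of H then acts as a candidate "concept" and each row of W gives an image's concept weights. A supervised or weakly supervised variant would presumably add a label-fit term to the same objective, but the exact NMFx formulation is not reproduced in this summary. The following is a minimal sketch using off-the-shelf scikit-learn NMF, not the authors' implementation; the feature matrix, concept count, and dimensions are illustrative.

```python
# Minimal sketch of concept discovery via plain NMF (scikit-learn), standing
# in for the paper's NMFx; the exact NMFx objective is not given here.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Stand-in for nonnegative deep features of n images (e.g., post-ReLU
# activations pooled per image), shape (n_samples, n_features).
X = rng.random((200, 64))

k = 5  # number of concepts to discover (illustrative choice)
model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)  # (n_samples, k): per-image concept weights
H = model.components_       # (k, n_features): each row is one "concept"

# To interpret a discovered concept, show a clinician the images that
# activate it most strongly and check whether they share a visual attribute.
top_images = np.argsort(-W[:, 0])[:5]
print("images most associated with concept 0:", top_images)
```

Because the same factorization applies whether or not labels are used (labels would only add terms to the objective), this style of formulation can cover the unsupervised, weakly supervised, and supervised settings the abstract describes.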
Related papers
- A Two-Step Concept-Based Approach for Enhanced Interpretability and Trust in Skin Lesion Diagnosis [6.6635650150737815]
Concept Bottleneck Models (CBMs) offer inherent interpretability by conditioning the final disease prediction on a set of human-understandable concepts.
We introduce a novel two-step methodology that addresses key challenges of CBMs.
By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts and a Large Language Model (LLM) to generate disease diagnoses (a schematic sketch of this two-stage pipeline appears after the list below).
arXiv Detail & Related papers (2024-11-08T14:52:42Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment [4.861768967055006]
We propose a multi-modal explainable disease diagnosis framework that meticulously aligns medical images and clinically related concepts semantically at multiple strata.
Our method, while preserving model interpretability, attains high performance and label efficiency for concept detection and disease diagnosis.
arXiv Detail & Related papers (2024-01-16T17:45:01Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z)
- ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions [4.886872847478552]
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal concept-based explanations consisting of easy-to-understand textual explanations.
It will be the basis for similar applications in other biomedical imaging fields.
arXiv Detail & Related papers (2022-01-04T17:11:28Z)
- Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [3.3918638314432936]
Deep neural networks have shown the same or better performance than clinicians in many tasks.
Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not well understood.
There is a need to ensure interpretability of deep neural networks before they can be incorporated in the routine clinical workflow.
arXiv Detail & Related papers (2021-11-01T01:42:26Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
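Several entries above, notably the two-step skin lesion approach and the GPT-4-based concept bottleneck classifier, share one structure: first predict named, human-interpretable concepts from the image, then diagnose from those concept scores alone. Below is a schematic, hedged sketch of that two-stage pipeline, not any cited paper's code: predict_concepts is a stub standing in for a pretrained vision-language model, the concept names are hypothetical, and a plain logistic regression replaces the LLM or learned diagnosis stage.

```python
# Schematic concept-bottleneck pipeline (illustrative only, not paper code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

CONCEPTS = ["ulceration", "crusting", "vesicles"]  # hypothetical concept names

def predict_concepts(images: np.ndarray) -> np.ndarray:
    """Stub for a pretrained vision-language model scoring each concept in [0, 1].

    Here it returns random scores purely so the sketch runs end to end.
    """
    return rng.random((len(images), len(CONCEPTS)))

# Synthetic placeholder "images" and binary diagnoses to exercise the pipeline.
images = np.zeros((100, 32, 32))
labels = rng.integers(0, 2, size=100)

concept_scores = predict_concepts(images)               # stage 1: image -> concepts
clf = LogisticRegression().fit(concept_scores, labels)  # stage 2: concepts -> diagnosis

# The bottleneck makes the decision inspectable: each learned weight is
# evidence attached to a named concept rather than to opaque features.
for name, weight in zip(CONCEPTS, clf.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

Because the diagnosis stage sees only named concept scores, its learned weights read directly as per-concept evidence for or against a disease, which is the interpretability benefit these concept-bottleneck papers target.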
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.