Using Causal Analysis for Conceptual Deep Learning Explanation
- URL: http://arxiv.org/abs/2107.06098v1
- Date: Sat, 10 Jul 2021 00:01:45 GMT
- Title: Using Causal Analysis for Conceptual Deep Learning Explanation
- Authors: Sumedha Singla, Stephen Wallace, Sofia Triantafillou, Kayhan
Batmanghelich
- Abstract summary: An ideal explanation resembles the decision-making process of a domain expert.
We take advantage of radiology reports accompanying chest X-ray images to define concepts.
We construct a low-depth decision tree to translate all the discovered concepts into a straightforward decision rule.
- Score: 11.552000005640203
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model explainability is essential for the creation of trustworthy Machine
Learning models in healthcare. An ideal explanation resembles the
decision-making process of a domain expert and is expressed using concepts or
terminology that is meaningful to the clinicians. To provide such an
explanation, we first associate the hidden units of the classifier to
clinically relevant concepts. We take advantage of radiology reports
accompanying the chest X-ray images to define concepts. We discover sparse
associations between concepts and hidden units using a linear sparse logistic
regression. To ensure that the identified units truly influence the
classifier's outcome, we adopt tools from the causal inference literature and, more
specifically, mediation analysis through counterfactual interventions. Finally,
we construct a low-depth decision tree to translate all the discovered concepts
into a straightforward decision rule, expressed to the radiologist. We
evaluated our approach on a large chest X-ray dataset, where our model produces
a global explanation consistent with clinical knowledge.
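
The abstract walks through three technical steps: associating hidden units with report-derived concepts via sparse logistic regression, checking that those units causally mediate the prediction through counterfactual interventions, and summarizing the concepts in a low-depth decision tree. Below is a minimal sketch of such a pipeline using synthetic data and scikit-learn; all names (unit_acts, edema_score, ...) are illustrative assumptions, and the flip-rate check is only a crude stand-in for the paper's full mediation analysis.

```python
# Minimal, self-contained sketch of the three-step pipeline described in the
# abstract, on synthetic data. Names and thresholds are illustrative, not the
# authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-ins:
#   unit_acts      - penultimate-layer activations of the chest X-ray classifier
#   concept_labels - one binary concept mined from radiology reports (e.g. "edema")
#   target         - the classifier decision we want to explain
n_images, n_units = 500, 64
unit_acts = rng.normal(size=(n_images, n_units))
concept_labels = (unit_acts[:, 3] + unit_acts[:, 17] + rng.normal(0, 0.5, n_images) > 0).astype(int)
target = (unit_acts[:, 3] - unit_acts[:, 40] > 0).astype(int)

# Step 1: sparse (L1-penalised) logistic regression associates the concept with
# a small set of hidden units; the non-zero coefficients mark the association.
concept_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
concept_model.fit(unit_acts, concept_labels)
concept_units = np.flatnonzero(concept_model.coef_[0])
print("units associated with the concept:", concept_units)

# Step 2 (counterfactual intervention, a crude proxy for mediation analysis):
# zero out the associated units and measure how often the decision flips; a
# large effect suggests the units genuinely influence the outcome.
def classifier(acts):
    # stand-in for the trained network's decision rule
    return (acts[:, 3] - acts[:, 40] > 0).astype(int)

intervened = unit_acts.copy()
intervened[:, concept_units] = 0.0
flip_rate = np.mean(classifier(unit_acts) != classifier(intervened))
print(f"decision flip rate under intervention: {flip_rate:.2%}")

# Step 3: fit a low-depth decision tree on per-image concept scores so the
# resulting rule can be read off directly by a radiologist.
concept_scores = concept_model.predict_proba(unit_acts)[:, [1]]  # single concept here
rule_tree = DecisionTreeClassifier(max_depth=2).fit(concept_scores, target)
print(export_text(rule_tree, feature_names=["edema_score"]))
```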
Related papers
- Aligning Characteristic Descriptors with Images for Human-Expert-like Explainability [0.0]
In mission-critical domains such as law enforcement and medical diagnosis, the ability to explain and interpret the outputs of deep learning models is crucial.
We propose a novel approach that utilizes characteristic descriptors to explain model decisions by identifying their presence in images.
arXiv Detail & Related papers (2024-11-06T15:47:18Z) - Explaining Chest X-ray Pathology Models using Textual Concepts [9.67960010121851]
We propose Conceptual Counterfactual Explanations for Chest X-ray (CoCoX)
We leverage the joint embedding space of an existing vision-language model (VLM) to explain black-box classifier outcomes without the need for annotated datasets.
We demonstrate that the explanations generated by our method are semantically meaningful and faithful to underlying pathologies.
arXiv Detail & Related papers (2024-06-30T01:31:54Z) - Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification [8.382606243533942]
We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis.
By leveraging a pretrained vision-language model, Explicd injects diagnostic criteria into the embedding space as knowledge anchors.
The final diagnostic outcome is determined by the similarity scores between the encoded visual concepts and the textual criteria embeddings (a minimal similarity-scoring sketch follows this list).
arXiv Detail & Related papers (2024-06-08T23:23:28Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model (see the concept-bottleneck sketch after this list).
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Investigating the Role of Centering Theory in the Context of Neural
Coreference Resolution Systems [71.57556446474486]
We investigate the connection between centering theory and modern coreference resolution systems.
We show that high-quality neural coreference resolvers may not benefit much from explicitly modeling centering ideas.
We formulate a version of CT that also models recency and show that it captures coreference information better than vanilla CT.
arXiv Detail & Related papers (2022-10-26T12:55:26Z) - Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z) - ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis
of Skin Lesions [4.886872847478552]
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal, concept-based explanations in the form of easy-to-understand text.
It will be the basis for similar applications in other biomedical imaging fields.
arXiv Detail & Related papers (2022-01-04T17:11:28Z) - BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer
Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller datasets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
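
As referenced in the Explicd entry above, the diagnosis there is read off from similarity scores between encoded visual concepts and textual criteria embeddings in a shared vision-language space. Below is a minimal sketch of that scoring step; the random vectors stand in for a pretrained vision-language model's outputs, and the criteria lists are hypothetical.

```python
# Minimal sketch of criteria-based diagnosis via embedding similarity, in the
# spirit of the Explicd entry above. Random vectors stand in for the outputs of
# a pretrained vision-language model; all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 128

# Textual criteria embeddings, one set per diagnostic class (knowledge anchors).
criteria_texts = {
    "benign": ["smooth margin", "oval shape"],
    "malignant": ["spiculated margin", "irregular shape"],
}
criteria_emb = {cls: rng.normal(size=(len(txts), embed_dim))
                for cls, txts in criteria_texts.items()}

# Encoded visual concepts extracted from one image by the vision encoder.
visual_concepts = rng.normal(size=(4, embed_dim))

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Score each class by the average similarity between its textual criteria and
# the image's visual concepts; the diagnosis is the best-matching class.
scores = {cls: cosine(visual_concepts, emb).mean() for cls, emb in criteria_emb.items()}
print(scores, "->", max(scores, key=scores.get))
```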
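
The concept-bottleneck entry above (Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models) predicts the label from explicit concept scores rather than raw features. A minimal sketch of that structure follows, with synthetic data standing in for the GPT-4-queried concepts and the vision-language scoring.

```python
# Minimal concept-bottleneck sketch: image features -> explicit concept scores
# -> interpretable classifier on concepts. Concept names and all data are
# synthetic stand-ins, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
concepts = ["opacity", "cardiomegaly", "effusion"]  # hypothetical clinical concepts

n_images, feat_dim = 300, 32
image_feats = rng.normal(size=(n_images, feat_dim))

# Stand-in for the vision-language model: a fixed linear map from image
# features to per-concept scores (in practice these would be VLM similarities).
concept_proj = rng.normal(size=(feat_dim, len(concepts)))
concept_scores = image_feats @ concept_proj

# Synthetic diagnosis label driven by the concepts, so the bottleneck is learnable.
labels = (concept_scores[:, 0] + 0.5 * concept_scores[:, 2] > 0).astype(int)

# The interpretable part: a linear classifier that sees only the concept scores,
# so each learned weight reads as the contribution of a named concept.
clf = LogisticRegression().fit(concept_scores, labels)
for name, w in zip(concepts, clf.coef_[0]):
    print(f"{name:>12}: weight {w:+.2f}")
```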