A Cognitive Explainer for Fetal ultrasound images classifier Based on
Medical Concepts
- URL: http://arxiv.org/abs/2201.07798v3
- Date: Tue, 18 Apr 2023 03:00:29 GMT
- Authors: Yingni Wang, Yunxiao Liu, Licong Dong, Xuzhou Wu, Huabin Zhang,
Qiongyu Ye, Desheng Sun, Xiaobo Zhou, Kehong Yuan
- Abstract summary: We propose an interpretable framework based on key medical concepts.
We utilize a concept-based graph convolutional network (GCN) to construct the relationships between key medical concepts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fetal standard scan plane detection during 2-D mid-pregnancy examinations is
a highly complex task, which requires extensive medical knowledge and years of
training. Although deep neural networks (DNNs) can assist inexperienced
operators in these tasks, their lack of transparency and interpretability limits
their application. While some researchers have worked on visualizing the
decision process of DNNs, most focus only on pixel-level features and do not
take medical prior knowledge into account. In this
work, we propose an interpretable framework based on key medical concepts,
which provides explanations from the perspective of clinicians' cognition.
Moreover, we utilize a concept-based graph convolutional network (GCN) to
construct the relationships between key medical concepts. Extensive
experimental analysis on a private dataset shows that the proposed method
provides clinicians with easy-to-understand insights into its reasoning.
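
The abstract gives no implementation details, but the core idea, a GCN run over a graph of detected medical concepts whose pooled output drives the plane prediction, can be sketched as follows. This is a minimal illustration under stated assumptions: the per-concept features, dimensions, and the learnable adjacency are hypothetical, not the authors' code.

```python
# Minimal sketch of a concept-based GCN classifier. Assumes upstream
# concept detectors already yield one feature vector per medical concept;
# the learnable adjacency stands in for the concept relationships the
# paper constructs. All names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        # a_hat: (C, C) normalized adjacency over C concepts
        # h:     (batch, C, in_dim) per-concept features
        return F.relu(self.linear(a_hat @ h))

class ConceptGCNClassifier(nn.Module):
    def __init__(self, num_concepts, feat_dim, hidden_dim, num_planes):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_concepts))  # learned concept graph
        self.gcn1 = GCNLayer(feat_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_planes)

    def forward(self, concept_feats):            # (batch, C, feat_dim)
        a_hat = torch.softmax(self.adj, dim=-1)  # simple row normalization
        h = self.gcn1(concept_feats, a_hat)
        h = self.gcn2(h, a_hat)
        return self.head(h.mean(dim=1))          # pool concepts into plane logits
```

Inspecting the learned adjacency and the per-concept activations is what would make such a model's reasoning legible to clinicians, which is the stated goal of the framework.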
Related papers
- LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery [5.236608333075716]
We propose the Lesion Concept Explainer (LCE) framework, which combines attribution methods with concept-based methods.
The proposed framework is evaluated in terms of both faithfulness and understandability.
Our evaluation on public and private breast ultrasound datasets shows that LCE performs well compared to commonly used explainability methods.
arXiv Detail & Related papers (2024-08-19T11:13:49Z)
- Influence based explainability of brain tumors segmentation in multimodal Magnetic Resonance Imaging [3.1994667952195273]
We focus on the task of medical image segmentation, where most explainability methods proposed so far provide a visual explanation in terms of an input saliency map.
The aim of this work is instead to extend, implement, and test an influence-based explainability algorithm, TracIn, originally proposed for classification tasks.
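
TracIn itself (Pruthi et al., 2020) scores the influence of a training example on a test example as a sum, over saved checkpoints, of learning-rate-weighted dot products between their loss gradients. A minimal sketch of the original classification form follows; checkpoint paths and learning rates are placeholders, and the paper's segmentation extension differs in the loss used.

```python
# Sketch of the TracInCP influence score: sum_k lr_k * <grad_k(train), grad_k(test)>.
import torch

def flat_grad(model, loss):
    """Flatten the gradient of `loss` w.r.t. all trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_score(model, ckpt_paths, lrs, loss_fn, train_ex, test_ex):
    x_tr, y_tr = train_ex   # each a batched (input, label) tensor pair
    x_te, y_te = test_ex
    score = 0.0
    for path, lr in zip(ckpt_paths, lrs):
        model.load_state_dict(torch.load(path))  # replay a training checkpoint
        g_tr = flat_grad(model, loss_fn(model(x_tr), y_tr))
        g_te = flat_grad(model, loss_fn(model(x_te), y_te))
        score += lr * torch.dot(g_tr, g_te).item()
    return score  # positive: proponent of the prediction; negative: opponent
```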
arXiv Detail & Related papers (2024-04-05T17:07:21Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
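
A rough sketch of that two-stage recipe, with open_clip standing in for the vision-language model; the concept list is hard-coded here for brevity, whereas the paper obtains it from GPT-4, and all model choices and names are assumptions rather than the authors' setup.

```python
# Concept-bottleneck sketch: score each clinical concept against the image
# with a vision-language model, then predict from the concept scores alone,
# so every class weight is readable per concept. Concepts are illustrative.
import torch
import open_clip

concepts = ["thalamus visible", "cavum septi pellucidi visible", "symmetric hemispheres"]

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

@torch.no_grad()
def concept_scores(image):                        # image: (1, 3, H, W), preprocessed
    img = model.encode_image(image)
    txt = model.encode_text(tokenizer(concepts))
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return img @ txt.T                            # (1, num_concepts) similarities

head = torch.nn.Linear(len(concepts), 2)          # interpretable bottleneck head
# logits = head(concept_scores(preprocess(pil_image).unsqueeze(0)))
```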
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z)
- Leveraging Human Selective Attention for Medical Image Analysis with Limited Training Data [72.1187887376849]
The selective attention mechanism helps the human cognitive system focus on task-relevant visual clues while ignoring distractors.
We propose a framework that leverages gaze data for medical image analysis tasks with limited training data.
Our method is demonstrated to achieve superior performance on both 3D tumor segmentation and 2D chest X-ray classification tasks.
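
One common way to exploit gaze in this setting, not necessarily this paper's exact formulation, is to treat the recorded gaze heatmap as soft supervision for the network's attention map, adding a divergence term to the task loss:

```python
# Gaze-supervision sketch: align the model's spatial attention with a
# radiologist's gaze heatmap via KL(gaze || attention). Hypothetical shapes.
import torch

def gaze_alignment_loss(attn_map, gaze_map, eps=1e-8):
    """attn_map, gaze_map: (batch, H, W), non-negative."""
    attn = attn_map.flatten(1)
    gaze = gaze_map.flatten(1)
    attn = attn / (attn.sum(dim=1, keepdim=True) + eps)   # normalize to distributions
    gaze = gaze / (gaze.sum(dim=1, keepdim=True) + eps)
    kl = gaze * ((gaze + eps).log() - (attn + eps).log())
    return kl.sum(dim=1).mean()

# total_loss = task_loss + lambda_gaze * gaze_alignment_loss(attn, gaze)
```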
arXiv Detail & Related papers (2021-12-02T07:55:25Z)
- Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [3.3918638314432936]
Deep neural networks have shown performance equal to or better than that of clinicians in many tasks.
Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not well understood.
There is a need to ensure interpretability of deep neural networks before they can be incorporated in the routine clinical workflow.
arXiv Detail & Related papers (2021-11-01T01:42:26Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology, however, because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller datasets.
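
The gist of such a scheme is a generator that translates the input toward the class being explained while a cycle-consistency term keeps the output anatomically plausible; the difference map between input and counterfactual then serves as the explanation. A loss-level sketch with hypothetical names; the adversarial terms of the full CycleGAN objective are omitted.

```python
# Counterfactual-explanation loss sketch: G maps x toward `target_class`,
# F_back maps it back; the frozen classifier steers G's output.
import torch
import torch.nn.functional as F

def explanation_loss(G, F_back, classifier, x, target_class, lam_cyc=10.0):
    x_cf = G(x)                                   # counterfactual image
    target = torch.full((x.size(0),), target_class,
                        dtype=torch.long, device=x.device)
    cls_loss = F.cross_entropy(classifier(x_cf), target)
    cyc_loss = F.l1_loss(F_back(x_cf), x)         # preserve anatomy
    return cls_loss + lam_cyc * cyc_loss
# The visual explanation is the difference map (x_cf - x).abs().
```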
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors [6.188009802619095]
We use a well-trained, high-performing neural network for the classification of three skin tumours, i.e., melanocytic naevi, melanoma, and seborrheic keratosis.
Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs).
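
For reference, a CAV is the normal vector of a linear separator fitted between layer activations of concept examples and random examples; the directional derivative of a class logit along it then measures per-example concept sensitivity. A minimal sketch with illustrative names:

```python
# CAV sketch (after Kim et al.'s TCAV): fit a concept-vs-random linear probe
# on intermediate activations, use its unit weight vector as the concept
# direction, and dot it with the logit gradient for sensitivity.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(acts_concept, acts_random):
    """acts_*: (n, d) flattened activations from the chosen layer."""
    X = np.concatenate([acts_concept, acts_random])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_random))])
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return w / np.linalg.norm(w)                  # unit-norm CAV

def concept_sensitivity(grad_logit_wrt_acts, cav):
    """Positive: the concept pushes the prediction toward the class."""
    return float(np.dot(grad_logit_wrt_acts, cav))
```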
arXiv Detail & Related papers (2020-05-05T08:27:16Z)