LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery
- URL: http://arxiv.org/abs/2408.09899v1
- Date: Mon, 19 Aug 2024 11:13:49 GMT
- Title: LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery
- Authors: Weiji Kong, Xun Gong, Juan Wang
- Abstract summary: We propose the Lesion Concept Explainer (LCE) framework, which combines attribution methods with concept-based methods.
The proposed framework is evaluated in terms of both faithfulness and understandability.
Our evaluation of public and private breast ultrasound datasets shows that LCE performs well compared to commonly-used explainability methods.
- Score: 5.236608333075716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explaining the decisions of Deep Neural Networks (DNNs) for medical images has become increasingly important. Existing attribution methods have difficulty explaining the meaning of pixels, while existing concept-based methods are limited by additional annotations or specific model structures that are difficult to apply to ultrasound images. In this paper, we propose the Lesion Concept Explainer (LCE) framework, which combines attribution methods with concept-based methods. We introduce the Segment Anything Model (SAM), fine-tuned on a large number of medical images, for concept discovery to enable a meaningful explanation of ultrasound image DNNs. The proposed framework is evaluated in terms of both faithfulness and understandability. We point out deficiencies in the popular faithfulness evaluation metrics and propose a new evaluation metric. Our evaluation of public and private breast ultrasound datasets (BUSI and FG-US-B) shows that LCE performs well compared to commonly-used explainability methods. Finally, we also validate that LCE can consistently provide reliable explanations for more meaningful fine-grained diagnostic tasks in breast ultrasound.
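The abstract does not include code, but the combination it describes (an attribution map weighted over segmentation-derived concept regions) can be illustrated with a minimal sketch. The mask list below stands in for lesion concepts discovered by a medical-image fine-tuned SAM, and the scoring rule (mean attribution inside each mask) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def score_concepts(attribution, concept_masks):
    """Rank candidate concept regions by the attribution they capture.

    attribution   : (H, W) attribution/saliency map for the model's prediction
    concept_masks : list of (H, W) boolean masks, e.g. regions proposed by a
                    segmentation model such as a fine-tuned SAM (stand-ins here)
    """
    scores = []
    for mask in concept_masks:
        # Mean attribution inside the region; empty masks score zero.
        scores.append(float(attribution[mask].mean()) if mask.any() else 0.0)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attr = rng.random((224, 224))                       # stand-in attribution map
    lesion = np.zeros((224, 224), dtype=bool);     lesion[50:100, 50:100] = True
    background = np.zeros((224, 224), dtype=bool); background[150:200, 150:200] = True
    print(score_concepts(attr, [lesion, background]))   # higher score = more relevant region
```

In an LCE-like setup, the highest-scoring regions would be surfaced as candidate explanations; the framework's actual concept discovery and faithfulness evaluation are described in the paper itself.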
Related papers
- MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment [4.861768967055006]
We propose a multi-modal explainable disease diagnosis framework that meticulously aligns medical images and clinical-related concepts semantically at multiple strata.
Our method, while preserving model interpretability, attains high performance and label efficiency for concept detection and disease diagnosis.
arXiv Detail & Related papers (2024-01-16T17:45:01Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model (a sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
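A minimal sketch of the concept-bottleneck idea summarized in the entry above: image features are compared against a bank of text-derived concept embeddings, and only the resulting concept scores feed an interpretable linear head. The concept names, dimensions, and weights below are random stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clinical concepts; in the paper these are queried from GPT-4 and
# embedded with a vision-language model. Here the embeddings are random stand-ins.
concept_names = ["irregular margin", "posterior shadowing", "calcification"]
concept_embeddings = rng.normal(size=(len(concept_names), 512))  # (C, D)
image_embedding = rng.normal(size=(512,))                        # (D,) from an image encoder

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Bottleneck: the image is represented only by its similarity to each concept.
concept_scores = np.array([cosine(image_embedding, c) for c in concept_embeddings])

# Interpretable head: a linear classifier over concept scores (weights are stand-ins).
class_weights = rng.normal(size=(2, len(concept_names)))          # (num_classes, C)
logits = class_weights @ concept_scores
print(dict(zip(concept_names, concept_scores.round(3))), logits)
```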
- BMAD: Benchmarks for Medical Anomaly Detection [51.22159321912891]
Anomaly detection (AD) is a fundamental research problem in machine learning and computer vision.
In medical imaging, AD is especially vital for detecting and diagnosing anomalies that may indicate rare diseases or conditions.
We introduce a comprehensive evaluation benchmark for assessing anomaly detection methods on medical images.
arXiv Detail & Related papers (2023-06-20T20:23:46Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information (an illustrative decomposition follows this entry).
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
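The DWT entry above does not specify an implementation; the single-level 2-D decomposition below (using the PyWavelets library) is a generic illustration of how detail sub-bands carrying high-frequency content can be kept alongside the low-frequency approximation.

```python
import numpy as np
import pywt  # PyWavelets

# Stand-in for a grayscale chest radiograph.
image = np.random.default_rng(0).random((256, 256))

# Single-level 2-D DWT: one approximation (low-frequency) sub-band and three
# detail (high-frequency) sub-bands, each at half the spatial resolution.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# One simple encoding: stack all sub-bands as channels so the high-frequency
# detail coefficients are preserved for a downstream classifier.
encoded = np.stack([cA, cH, cV, cD], axis=0)
print(encoded.shape)  # (4, 128, 128)
```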
- A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts [0.0]
We propose an interpretable framework based on key medical concepts.
We utilize a concept-based graph convolutional network (GCN) to construct the relationships between key medical concepts (a generic propagation step is sketched after this entry).
arXiv Detail & Related papers (2022-01-19T14:44:36Z)
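The fetal-ultrasound entry above builds on a graph convolutional network over medical concepts. The snippet below shows only the standard GCN propagation rule on a toy concept graph; the graph, features, and weights are random stand-ins rather than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
num_concepts, in_dim, out_dim = 5, 16, 8

A = (rng.random((num_concepts, num_concepts)) > 0.6).astype(float)  # concept adjacency (stand-in)
A = np.maximum(A, A.T)                                               # symmetrize
X = rng.normal(size=(num_concepts, in_dim))                          # per-concept features (stand-in)
W = rng.normal(size=(in_dim, out_dim))                               # layer weights (stand-in)

# Standard GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
A_hat = A + np.eye(num_concepts)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
print(H.shape)  # (5, 8)
```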
- Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units [24.761080054980713]
We demonstrate the efficiency of recent attribution techniques to explain the diagnostic decision by visualizing the significant factors in the input image.
Our analysis of unmasking machine intelligence underscores the necessity of explainability in medical diagnostic decisions.
arXiv Detail & Related papers (2021-07-19T11:49:31Z)
- Act Like a Radiologist: Towards Reliable Multi-view Correspondence Reasoning for Mammogram Mass Detection [49.14070210387509]
We propose an Anatomy-aware Graph convolutional Network (AGN) for mammogram mass detection.
AGN is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability.
Experiments on two standard benchmarks reveal that AGN significantly exceeds the state-of-the-art performance.
arXiv Detail & Related papers (2021-05-21T06:48:34Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation (a sketch of the cross reconstruction idea follows this entry).
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
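The co-attention entry above mentions a cross reconstruction loss between two views. The linear toy example below only illustrates the general idea (reconstruct each view from the other view's latent code and penalize the error); the encoders, decoders, and dimensions are assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
d_view, d_latent = 32, 8

# Stand-in linear encoders/decoders for two paired views of the same sample.
enc_a, enc_b = rng.normal(size=(d_latent, d_view)), rng.normal(size=(d_latent, d_view))
dec_a, dec_b = rng.normal(size=(d_view, d_latent)), rng.normal(size=(d_view, d_latent))
x_a, x_b = rng.normal(size=(d_view,)), rng.normal(size=(d_view,))

z_a, z_b = enc_a @ x_a, enc_b @ x_b                  # per-view latent codes

# Cross reconstruction: rebuild each view from the *other* view's latent code.
x_a_from_b, x_b_from_a = dec_a @ z_b, dec_b @ z_a
cross_recon_loss = np.mean((x_a - x_a_from_b) ** 2) + np.mean((x_b - x_b_from_a) ** 2)
print(cross_recon_loss)
```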
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered intransparent and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation, which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Uncertainty Quantification using Variational Inference for Biomedical Image Segmentation [0.0]
We use an encoder-decoder architecture based on variational inference techniques for segmenting brain tumour images.
We evaluate our work on the publicly available BRATS dataset using Dice Similarity Coefficient (DSC) and Intersection Over Union (IOU) as the evaluation metrics (both metrics are sketched after this entry).
arXiv Detail & Related papers (2020-08-12T20:08:04Z)
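The segmentation entry above reports Dice Similarity Coefficient and Intersection over Union. Both are standard metrics and, for binary masks, can be computed as below; the example masks are arbitrary.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice Similarity Coefficient = 2|P ∩ T| / (|P| + |T|) for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def iou(pred, target, eps=1e-8):
    """Intersection over Union = |P ∩ T| / |P ∪ T| for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

if __name__ == "__main__":
    pred = np.zeros((64, 64), dtype=bool);   pred[10:40, 10:40] = True
    target = np.zeros((64, 64), dtype=bool); target[15:45, 15:45] = True
    print(f"DSC={dice(pred, target):.3f}  IoU={iou(pred, target):.3f}")
```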
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.