Signature Activation: A Sparse Signal View for Holistic Saliency
- URL: http://arxiv.org/abs/2309.11443v1
- Date: Wed, 20 Sep 2023 16:17:26 GMT
- Title: Signature Activation: A Sparse Signal View for Holistic Saliency
- Authors: Jose Roberto Tello Ayala, Akl C. Fahed, Weiwei Pan, Eugene V.
Pomerantsev, Patrick T. Ellinor, Anthony Philippakis, Finale Doshi-Velez
- Abstract summary: We introduce Signature Activation, a saliency method that generates holistic and class-agnostic explanations for CNN outputs.
Our method exploits the fact that certain kinds of medical images, such as angiograms, have clear foreground and background objects.
We show the potential use of our method in clinical settings by evaluating its efficacy in aiding the detection of lesions in coronary angiograms.
- Score: 18.699129959911485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The adoption of machine learning in healthcare calls for model transparency
and explainability. In this work, we introduce Signature Activation, a saliency
method that generates holistic and class-agnostic explanations for
Convolutional Neural Network (CNN) outputs. Our method exploits the fact that
certain kinds of medical images, such as angiograms, have clear foreground and
background objects. We give a theoretical explanation to justify our method. We
demonstrate the potential use of our method in clinical settings by evaluating
its efficacy in aiding the detection of lesions in coronary angiograms.
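As a rough illustration of this kind of class-agnostic, activation-based saliency (the paper's actual Signature Activation algorithm is not specified in this listing, so the backbone, layer choice, and threshold below are assumptions), one might aggregate late convolutional activations without any class gradient and sparsify the result:

# Illustrative, class-agnostic activation-aggregation saliency sketch.
# NOT the paper's Signature Activation algorithm: it only shows the general
# idea of aggregating conv activations and keeping a sparse foreground signal.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
activations = {}

def hook(_module, _inp, out):
    activations["feat"] = out.detach()

# Hook a late convolutional stage; the layer choice is an assumption.
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed angiogram
with torch.no_grad():
    model(x)

feat = activations["feat"]                  # (1, C, h, w)
sal = feat.abs().mean(dim=1, keepdim=True)  # channel-wise aggregation, no class gradient
sal = F.interpolate(sal, size=x.shape[-2:], mode="bilinear", align_corners=False)
sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Sparsify: keep the top 10% of pixels as the "foreground" explanation.
threshold = torch.quantile(sal.flatten(), 0.90)
mask = (sal >= threshold).float()
print(mask.mean().item())  # fraction of pixels kept (~0.10)

The quantile threshold here plays the role of the foreground/background separation the abstract alludes to; the real method's selection rule may differ.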
Related papers
- Semi-Supervised Graph Representation Learning with Human-centric Explanation for Predicting Fatty Liver Disease [2.992602379681373]
This study explores the potential of graph representation learning within a semi-supervised learning framework.
Our approach constructs a subject similarity graph to identify risk patterns from health checkup data.
arXiv Detail & Related papers (2024-03-05T08:59:45Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
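For context, the global image-text contrastive objective that this family of methods builds on can be sketched as a symmetric InfoNCE loss; the encoders, embedding size, and temperature below are placeholders, not MLIP's actual design:

# Minimal sketch of a symmetric image-text contrastive (InfoNCE) loss.
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))       # matched pairs lie on the diagonal
    # Symmetric cross-entropy: image->text and text->image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random stand-ins for encoder outputs.
img = torch.randn(8, 256)
txt = torch.randn(8, 256)
print(clip_style_contrastive_loss(img, txt).item())

MLIP's divergence encoder and knowledge-guided components are refinements on top of this basic objective and are not reproduced here.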
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT outperforms previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Explainable and Interpretable Diabetic Retinopathy Classification Based on Neural-Symbolic Learning [71.76441043692984]
We propose an explainable and interpretable diabetic retinopathy (ExplainDR) classification model based on neural-symbolic learning.
We introduce a human-readable symbolic representation, which follows a taxonomy style of diabetic retinopathy characteristics related to eye health conditions to achieve explainability.
arXiv Detail & Related papers (2022-04-01T00:54:12Z)
- Leveraging Human Selective Attention for Medical Image Analysis with Limited Training Data [72.1187887376849]
The selective attention mechanism helps the human cognitive system focus on task-relevant visual cues while ignoring distractors.
We propose a framework to leverage gaze for medical image analysis tasks with small training data.
Our method is demonstrated to achieve superior performance on both 3D tumor segmentation and 2D chest X-ray classification tasks.
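A minimal sketch of one way gaze could supervise a model, assuming a hypothetical setup in which recorded gaze heatmaps regularize a learned attention map (this illustrates the generic mechanism, not necessarily the paper's exact framework):

# Hypothetical gaze-guided training: the classifier's attention map is
# pulled toward a recorded human gaze heatmap via an auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GazeGuidedClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.attn_head = nn.Conv2d(32, 1, 1)  # predicts a spatial attention map
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        feat = self.backbone(x)                     # (B, 32, H, W)
        attn = torch.sigmoid(self.attn_head(feat))  # (B, 1, H, W)
        pooled = (feat * attn).mean(dim=(2, 3))     # attention-weighted pooling
        return self.classifier(pooled), attn

model = GazeGuidedClassifier()
x = torch.randn(4, 1, 64, 64)    # e.g. chest X-ray crops
gaze = torch.rand(4, 1, 64, 64)  # recorded gaze heatmaps in [0, 1]
labels = torch.randint(0, 2, (4,))

logits, attn = model(x)
loss = F.cross_entropy(logits, labels) + 0.5 * F.mse_loss(attn, gaze)
loss.backward()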
arXiv Detail & Related papers (2021-12-02T07:55:25Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
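A generic multitask sketch in this spirit uses a shared encoder with one head for the diagnosis and additional heads for morphological descriptors; the descriptor names and head sizes below are illustrative assumptions, not BI-RADS-Net's actual architecture:

# Generic multitask model: shared encoder, diagnosis head, descriptor heads.
import torch
import torch.nn as nn

class MultitaskBreastNet(nn.Module):
    def __init__(self, n_shape=3, n_margin=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.diagnosis = nn.Linear(16, 2)      # benign vs. malignant
        self.shape = nn.Linear(16, n_shape)    # e.g. oval / round / irregular
        self.margin = nn.Linear(16, n_margin)  # e.g. circumscribed ... spiculated

    def forward(self, x):
        z = self.encoder(x)
        return self.diagnosis(z), self.shape(z), self.margin(z)

model = MultitaskBreastNet()
diag, shape, margin = model(torch.randn(2, 1, 128, 128))
print(diag.shape, shape.shape, margin.shape)

Training the descriptor heads alongside the diagnosis head is what lets the explanations be phrased in the clinical vocabulary the summary describes.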
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Improving Interpretability of Deep Neural Networks in Medical Diagnosis by Investigating the Individual Units [24.761080054980713]
We demonstrate the efficiency of recent attribution techniques to explain the diagnostic decision by visualizing the significant factors in the input image.
Our analysis highlights the necessity of explainability in medical diagnostic decisions.
arXiv Detail & Related papers (2021-07-19T11:49:31Z)
- Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff [0.0]
We explore the current state of the art in explainability and interpretability through a case study in clinical text classification.
We demonstrate various visualization techniques for fully interpretable methods as well as model-agnostic post hoc attributions.
We introduce a framework through which practitioners and researchers can assess the frontier between a model's predictive performance and the quality of its available explanations.
arXiv Detail & Related papers (2021-07-12T19:07:24Z)
- DeepOpht: Medical Report Generation for Retinal Images via Deep Models and Visual Explanation [24.701001374139047]
The proposed method is composed of a deep neural network (DNN)-based module that includes a retinal disease identifier and a clinical description generator.
Our method is capable of creating meaningful retinal image descriptions and visual explanations that are clinically relevant.
arXiv Detail & Related papers (2020-11-01T17:28:12Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
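The core of activation maximization, stripped of the paper's CycleGAN constraint, is gradient ascent on the input to drive up one class logit; a minimal sketch follows (the backbone and the class index are assumptions):

# Plain activation maximization: optimize the input to maximize a class logit.
# The paper additionally constrains the search with a CycleGAN so results stay
# on the image manifold; that part is omitted here for brevity.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)
target_class = 1  # hypothetical "pathology present" class

for _ in range(50):
    optimizer.zero_grad()
    logit = model(x)[0, target_class]
    (-logit).backward()  # ascend the logit by descending its negative
    optimizer.step()

print(model(x)[0, target_class].item())  # logit should have increased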
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors [6.188009802619095]
We use a well-trained, high-performing neural network to classify three skin tumours: Melanocytic Naevi, Melanoma, and Seborrheic Keratosis.
Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs).
arXiv Detail & Related papers (2020-05-05T08:27:16Z)
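A minimal sketch of the CAV mechanism as described in the original TCAV work: fit a linear separator between activations of concept examples and random examples, take its normal vector as the CAV, and measure the directional derivative of a class logit along it. All data below is synthetic, and the toy linear head stands in for the upper layers of a real classifier:

# Concept Activation Vector (CAV) sketch with synthetic activations.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
act_concept = rng.normal(1.0, 1.0, size=(50, 64))  # activations of concept images
act_random = rng.normal(0.0, 1.0, size=(50, 64))   # activations of random images

X = np.vstack([act_concept, act_random])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = torch.tensor(clf.coef_[0], dtype=torch.float32)
cav = cav / cav.norm()  # the CAV: unit normal of the linear separator

# Concept sensitivity: gradient of a class logit w.r.t. the activation,
# projected onto the CAV.
head = torch.nn.Linear(64, 3)  # 3 classes, e.g. naevus / melanoma / keratosis
a = torch.randn(1, 64, requires_grad=True)
logit = head(a)[0, 1]          # class 1 ("melanoma", hypothetical)
grad = torch.autograd.grad(logit, a)[0]
print(torch.dot(grad[0], cav).item() > 0)  # positive => concept raises the logit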