SHAMSUL: Systematic Holistic Analysis to investigate Medical
Significance Utilizing Local interpretability methods in deep learning for
chest radiography pathology prediction
- URL: http://arxiv.org/abs/2307.08003v2
- Date: Fri, 17 Nov 2023 18:47:42 GMT
- Authors: Mahbub Ul Alam, Jaakko Hollmén, Jón Rúnar Baldvinsson, Rahim Rahmani
- Abstract summary: The study delves into the application of four well-established interpretability methods: Local Interpretable Model-agnostic Explanations (LIME), Shapley Additive exPlanations (SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Layer-wise Relevance Propagation (LRP).
Our analysis encompasses both single-label and multi-label predictions, providing a comprehensive and unbiased assessment through quantitative and qualitative investigations, which are compared against human expert annotation.
- Score: 1.0138723409205497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The interpretability of deep neural networks has become a subject of great
interest within the medical and healthcare domain. This attention stems from
concerns regarding transparency, legal and ethical considerations, and the
medical significance of predictions generated by these deep neural networks in
clinical decision support systems. To address this matter, our study delves
into the application of four well-established interpretability methods: Local
Interpretable Model-agnostic Explanations (LIME), Shapley Additive exPlanations
(SHAP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Layer-wise
Relevance Propagation (LRP). Leveraging the approach of transfer learning with
a multi-label-multi-class chest radiography dataset, we aim to interpret
predictions pertaining to specific pathology classes. Our analysis encompasses
both single-label and multi-label predictions, providing a comprehensive and
unbiased assessment through quantitative and qualitative investigations, which
are compared against human expert annotation. Notably, Grad-CAM demonstrates
the most favorable performance in quantitative evaluation, while the LIME
heatmap score segmentation visualization exhibits the highest level of medical
significance. Our research underscores both the outcomes and the challenges
faced in the holistic approach adopted for assessing these interpretability
methods and suggests that a multimodal-based approach, incorporating diverse
sources of information beyond chest radiography images, could offer additional
insights for enhancing interpretability in the medical domain.
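As a concrete illustration of the setup, the minimal sketch below computes a Grad-CAM heatmap, the method that scored best in the paper's quantitative evaluation, for a single pathology class. The DenseNet-121 backbone, target layer, and class index are illustrative assumptions rather than the paper's reported configuration; the pretrained weights stand in for a model fine-tuned on chest radiographs via transfer learning.

```python
# Minimal Grad-CAM sketch (PyTorch). Backbone, target layer, and class index
# are illustrative assumptions, not the paper's reported configuration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights="DEFAULT")  # stand-in for a CXR-fine-tuned model
model.eval()

feats, grads = {}, {}
target = model.features.denseblock4  # last dense block: a common Grad-CAM target
target.register_forward_hook(lambda m, i, o: feats.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, class_idx):
    """Return a [0, 1] heatmap with the spatial size of the input image."""
    logits = model(x)                    # x: (1, 3, H, W), ImageNet-normalized
    model.zero_grad()
    logits[0, class_idx].backward()      # gradient of the target class score
    w = grads["a"].mean(dim=(2, 3), keepdim=True)            # GAP over space
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))  # weighted maps
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
```

LIME, SHAP, and LRP admit the same per-class treatment through off-the-shelf implementations (for instance the `lime` and `shap` packages and Captum's `LRP`), so each method can produce a heatmap that is then compared against expert annotations.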
Related papers
- Towards Multi-dimensional Explanation Alignment for Medical Classification [16.799101204390457]
We propose a novel framework called Med-MICN (Medical Multi-dimensional Interpretable Concept Network).
Med-MICN provides interpretability alignment from various angles, including neural symbolic reasoning, concept semantics, and saliency maps.
Its advantages include high prediction accuracy, interpretability across multiple dimensions, and automation through an end-to-end concept labeling process.
arXiv Detail & Related papers (2024-10-28T20:03:19Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
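The MLIP entry above names several contrastive terms without giving formulas; the sketch below shows the generic symmetric image-text InfoNCE loss (CLIP-style) that such global image-text contrastive learning typically builds on. It is a stand-in under that assumption, not MLIP's exact objective.

```python
# Generic symmetric image-text InfoNCE loss (CLIP-style); a sketch of the kind
# of global contrastive term described above, not MLIP's exact objective.
import torch
import torch.nn.functional as F

def global_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)       # (N, D) image embeddings
    txt = F.normalize(txt_emb, dim=-1)       # (N, D); row i pairs with image i
    logits = img @ txt.t() / temperature     # (N, N) cosine-similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy: match images to texts and texts to images.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

loss = global_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```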
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot-based approach for skin lesions that generalizes well from a small amount of labelled data.
The proposed approach comprises a fusion of a segmentation network, which acts as an attention module, and a classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z)
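The skin-lesion entry above describes a segmentation network acting as an attention module for a classifier. A minimal sketch of one plausible reading, soft residual gating of the input by the predicted lesion mask, follows; the gating scheme and module names are assumptions, not the paper's architecture.

```python
# Sketch of a segmentation network used as an attention module for a
# classifier; the residual gating scheme here is an assumption.
import torch
import torch.nn as nn

class SegGatedClassifier(nn.Module):
    def __init__(self, seg_net: nn.Module, cls_net: nn.Module):
        super().__init__()
        self.seg_net = seg_net  # predicts (N, 1, H, W) lesion-mask logits
        self.cls_net = cls_net  # any image classifier

    def forward(self, x):
        mask = torch.sigmoid(self.seg_net(x))  # soft lesion mask in [0, 1]
        return self.cls_net(x * (1.0 + mask))  # residual attention gating
```

Any mask-producing `seg_net` and image classifier `cls_net` with compatible shapes can be plugged in.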
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- A Reliable and Interpretable Framework of Multi-view Learning for Liver Fibrosis Staging [13.491056805108183]
Staging of liver fibrosis is important in the diagnosis and treatment planning of patients suffering from liver diseases.
Current deep learning-based methods using abdominal magnetic resonance imaging (MRI) usually take a sub-region of the liver as an input.
We formulate this task as a multi-view learning problem and employ multiple sub-regions of the liver.
arXiv Detail & Related papers (2023-06-21T06:53:51Z)
- Multimodal Explainability via Latent Shift applied to COVID-19 stratification [0.7831774233149619]
We present a deep architecture that jointly learns modality reconstructions and sample classifications.
We validate our approach in the context of COVID-19 pandemic using the AIforCOVID dataset.
arXiv Detail & Related papers (2022-12-28T20:07:43Z)
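The COVID-19 entry above jointly learns modality reconstructions and sample classifications; a minimal sketch of one such joint objective, per-modality reconstruction terms plus a classification term, is given below. The MSE terms and the weighting factor are assumptions, not the paper's formulation.

```python
# Sketch of a joint objective: per-modality reconstruction plus classification.
# The MSE terms and the weighting factor are illustrative assumptions.
import torch.nn.functional as F

def joint_loss(recons, inputs, logits, labels, alpha=1.0):
    # recons/inputs: matched lists of per-modality tensors (e.g. image, clinical)
    rec = sum(F.mse_loss(r, x) for r, x in zip(recons, inputs))
    cls = F.cross_entropy(logits, labels)   # classification on the shared latent
    return cls + alpha * rec
```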
- Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images [0.0]
We evaluate attribution methods for illuminating how deep neural networks analyze medical images.
We attribute predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets.
arXiv Detail & Related papers (2022-08-01T16:05:14Z)
- Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff [0.0]
We explore the current art of explainability and interpretability within a case study in clinical text classification.
We demonstrate various visualization techniques for fully interpretable methods as well as model-agnostic post hoc attributions.
We introduce a framework through which practitioners and researchers can assess the frontier between a model's predictive performance and the quality of its available explanations.
arXiv Detail & Related papers (2021-07-12T19:07:24Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
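The co-attention entry above extracts common and complementary information across views; the sketch below shows the core cross-attention mechanism in which each view attends to the other. It omits the paper's adversarial training and cross reconstruction loss and is a structural illustration only.

```python
# Minimal two-view co-attention sketch: each view attends to the other.
# Illustrates the mechanism only; the adversarial and cross-reconstruction
# parts described above are omitted.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        # a, b: (N, L, D) token sequences from two views
        a_ctx, _ = self.a2b(query=a, key=b, value=b)  # view A attends to B
        b_ctx, _ = self.b2a(query=b, key=a, value=a)  # view B attends to A
        return a_ctx, b_ctx

a_ctx, b_ctx = CoAttention(64)(torch.randn(2, 10, 64), torch.randn(2, 12, 64))
```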
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
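The last entry builds on activation maximization; the sketch below shows the vanilla gradient-ascent form that the paper constrains with a cycle-consistent GAN to keep the optimized image realistic. The optimizer, step count, and learning rate are illustrative assumptions.

```python
# Vanilla activation maximization by gradient ascent on the input; the paper
# additionally constrains the search with a CycleGAN so the optimized image
# stays on the image manifold. Optimizer and hyperparameters are assumptions.
import torch

def activation_maximization(model, x0, class_idx, steps=100, lr=0.1):
    x = x0.clone().requires_grad_(True)       # x0: (1, C, H, W) starting image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model(x)[0, class_idx]).backward()  # ascend the target class score
        opt.step()
    return x.detach()
```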