Enhancing Deep Learning Model Explainability in Brain Tumor Datasets using Post-Heuristic Approaches
- URL: http://arxiv.org/abs/2404.19568v1
- Date: Tue, 30 Apr 2024 13:59:13 GMT
- Title: Enhancing Deep Learning Model Explainability in Brain Tumor Datasets using Post-Heuristic Approaches
- Authors: Konstantinos Pasvantis, Eftychios Protopapadakis
- Abstract summary: This study addresses the inherent lack of explainability during decision-making processes.
The primary focus is directed towards refining the explanations generated by the LIME library and its image explainer.
Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results.
- Score: 1.325953054381901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of deep learning models in medical diagnosis has showcased considerable efficacy in recent years. Nevertheless, a notable limitation is the inherent lack of explainability during decision-making processes. This study addresses this constraint by enhancing the robustness of interpretability. The primary focus is directed towards refining the explanations generated by the LIME library and its image explainer. This is achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments have been conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
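The abstract does not publish the paper's exact scenario-specific rules, but the general idea of post-processing a LIME explanation mask can be sketched. Below is a minimal illustrative example assuming the binary mask comes from a LIME-style image explainer (e.g. `lime_image.LimeImageExplainer.explain_instance` followed by `get_image_and_mask`); the function name `refine_lime_mask` and the two rules (restrict highlights to the brain region, drop tiny components) are hypothetical stand-ins for the paper's heuristics, implemented in pure NumPy:

```python
import numpy as np
from collections import deque

def refine_lime_mask(mask, brain_mask, min_region=20):
    """Post-heuristic refinement of a LIME-style binary explanation mask.

    Hypothetical scenario-specific rules, in the spirit of the paper:
      1. Drop highlighted pixels falling outside the brain region.
      2. Drop 4-connected components smaller than `min_region` pixels,
         which are likely superpixel noise rather than tumor evidence.
    """
    refined = mask.astype(bool) & brain_mask.astype(bool)   # rule 1
    seen = np.zeros_like(refined, dtype=bool)
    out = np.zeros_like(refined, dtype=bool)
    h, w = refined.shape
    for i in range(h):
        for j in range(w):
            if refined[i, j] and not seen[i, j]:
                # BFS to collect one 4-connected component
                comp, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and refined[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_region:                  # rule 2
                    for y, x in comp:
                        out[y, x] = True
    return out
```

In practice one would feed the refined mask back into the visualization step, so the clinician sees only highlighted regions that survive the domain rules.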
Related papers
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary
Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z) - A Quantitatively Interpretable Model for Alzheimer's Disease Prediction
Using Deep Counterfactuals [9.063447605302219]
Our framework produces an "AD-relatedness index" for each region of the brain.
It offers an intuitive understanding of brain status for an individual patient and across patient groups with respect to Alzheimer's disease (AD) progression.
arXiv Detail & Related papers (2023-10-05T10:55:10Z) - UniBrain: Universal Brain MRI Diagnosis with Hierarchical
Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z) - Beyond Known Reality: Exploiting Counterfactual Explanations for Medical
Research [1.6574413179773761]
Our study uses counterfactual explanations to explore the applicability of "what if?" scenarios in medical research.
Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors.
arXiv Detail & Related papers (2023-07-05T09:14:09Z) - Multimodal Explainability via Latent Shift applied to COVID-19 stratification [0.7831774233149619]
We present a deep architecture, which jointly learns modality reconstructions and sample classifications.
We validate our approach in the context of COVID-19 pandemic using the AIforCOVID dataset.
arXiv Detail & Related papers (2022-12-28T20:07:43Z) - Benchmarking Heterogeneous Treatment Effect Models through the Lens of
Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z) - Explainable Deep Learning Methods in Medical Image Classification: A
Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models.
arXiv Detail & Related papers (2022-05-10T09:28:14Z) - SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for
Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitations of computing resources on portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin diseases classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z) - MMLN: Leveraging Domain Knowledge for Multimodal Diagnosis [10.133715767542386]
We propose a knowledge-driven and data-driven framework for lung disease diagnosis.
We formulate diagnosis rules according to authoritative clinical medicine guidelines and learn the weights of rules from text data.
A multimodal fusion consisting of text and image data is designed to infer the marginal probability of lung disease.
arXiv Detail & Related papers (2022-02-09T04:12:30Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.