Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach
to Model Interpretability and Precision
- URL: http://arxiv.org/abs/2402.16008v1
- Date: Sun, 25 Feb 2024 06:53:35 GMT
- Title: Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach
to Model Interpretability and Precision
- Authors: Yasmine Mustafa and Tie Luo
- Abstract summary: We introduce an interpretable, multimodal model for Alzheimer's disease (AD) classification over its multi-stage progression, incorporating Jacobian Saliency Map (JSM) as a modality-agnostic tool.
Our evaluation, including an ablation study, demonstrates the efficacy of using JSM for model debugging and interpretation, while also significantly enhancing model accuracy.
- Score: 1.5501208213584152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of deep learning and artificial intelligence has significantly
reshaped technological landscapes. However, their effective application in
crucial sectors such as medicine demands more than just superior performance,
but trustworthiness as well. While interpretability plays a pivotal role,
existing explainable AI (XAI) approaches often do not reveal Clever Hans
behavior, where a model makes (ungeneralizable) correct predictions using
spurious correlations or biases in the data. Likewise, current post-hoc XAI methods
are susceptible to generating unjustified counterfactual examples. In this
paper, we approach XAI with an innovative model debugging methodology
realized through Jacobian Saliency Map (JSM). To cast the problem into a
concrete context, we employ Alzheimer's disease (AD) diagnosis as the use case,
motivated by its significant impact on human lives and the formidable challenge
in its early detection, stemming from the intricate nature of its progression.
We introduce an interpretable, multimodal model for AD classification over its
multi-stage progression, incorporating JSM as a modality-agnostic tool that
provides insights into volumetric changes indicative of brain abnormalities.
Our extensive evaluation, including an ablation study, demonstrates the efficacy of
using JSM for model debugging and interpretation, while also significantly enhancing
model accuracy.
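To make the title's idea of "masking input gradients" concrete, below is a minimal sketch (not the authors' code) that computes a voxel-wise input-gradient (Jacobian) saliency map for a 3D volume classifier and masks out low-saliency voxels before re-running inference, a simple way to probe whether predictions rest on plausible regions. The Tiny3DClassifier, the input shape, and the keep_fraction threshold are hypothetical placeholders, and PyTorch is assumed.

```python
# Minimal sketch, assuming PyTorch: input-gradient saliency for a 3D volume
# classifier, plus masking of low-saliency voxels to probe model behavior.
# The model, shapes, and threshold are illustrative placeholders, not the
# authors' JSM pipeline.
import torch
import torch.nn as nn


class Tiny3DClassifier(nn.Module):
    """Stand-in 3D CNN for multi-stage AD classification (hypothetical)."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.head = nn.Linear(8 * 4 * 4 * 4, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def input_gradient_saliency(model, volume, target_class):
    """Gradient of the target-class logit w.r.t. each input voxel."""
    volume = volume.clone().requires_grad_(True)
    logits = model(volume)
    logits[0, target_class].backward()
    return volume.grad.abs().squeeze()  # voxel-wise saliency map


def mask_low_saliency(volume, saliency, keep_fraction=0.2):
    """Zero out all but the top-`keep_fraction` most salient voxels."""
    threshold = torch.quantile(saliency.flatten(), 1.0 - keep_fraction)
    mask = (saliency >= threshold).float()
    return volume * mask


if __name__ == "__main__":
    model = Tiny3DClassifier().eval()
    mri = torch.randn(1, 1, 32, 32, 32)  # placeholder MRI volume
    saliency = input_gradient_saliency(model, mri, target_class=1)
    masked = mask_low_saliency(mri, saliency.unsqueeze(0).unsqueeze(0))
    print(model(mri).softmax(-1), model(masked).softmax(-1))
```

Masking and re-predicting in this way is one inexpensive check for Clever Hans behavior: if the class probabilities barely change when only a small, anatomically implausible set of voxels is kept, the model is likely relying on spurious cues.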
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
arXiv Detail & Related papers (2024-09-20T15:43:26Z)
- An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease [13.213387075528017]
Alzheimer's disease (AD) is the most prevalent form of dementia with a progressive decline in cognitive abilities.
In this study, we leveraged structural and functional MRI to investigate the disease-induced grey matter and functional network connectivity changes.
We propose a novel deep learning-based classification framework in which a generative module employing CycleGANs imputes missing data within the latent space.
arXiv Detail & Related papers (2024-06-19T07:31:47Z)
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- Mitigating annotation shift in cancer classification using single image generative models [1.1864334278373239]
This study simulates, analyses and mitigates annotation shifts in cancer classification in the breast mammography domain.
We propose a training data augmentation approach based on single-image generative models for the affected class.
Our study offers key insights into annotation shift in deep learning breast cancer classification and explores the potential of single-image generative models to overcome domain shift challenges.
arXiv Detail & Related papers (2024-05-30T07:02:50Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge the sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- Explainable AI for Malnutrition Risk Prediction from m-Health and Clinical Data [3.093890460224435]
This paper presents a novel AI framework for early and explainable malnutrition risk detection based on heterogeneous m-health data.
We performed an extensive model evaluation including both subject-independent and personalised predictions.
We also investigated several benchmark XAI methods to extract global model explanations.
arXiv Detail & Related papers (2023-05-31T08:07:35Z)
- Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of model monitoring and updating.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression [71.7560927415706]
The latent hybridisation model (LHM) integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system.
We evaluate LHM on synthetic data as well as real-world intensive care data of COVID-19 patients.
arXiv Detail & Related papers (2021-06-05T11:42:45Z)
- Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.