A Quantitatively Interpretable Model for Alzheimer's Disease Prediction
Using Deep Counterfactuals
- URL: http://arxiv.org/abs/2310.03457v1
- Date: Thu, 5 Oct 2023 10:55:10 GMT
- Title: A Quantitatively Interpretable Model for Alzheimer's Disease Prediction
Using Deep Counterfactuals
- Authors: Kwanseok Oh, Da-Woon Heo, Ahmad Wisnu Mulyadi, Wonsik Jung, Eunsong
Kang, Kun Ho Lee, Heung-Il Suk
- Abstract summary: Our framework produces an "AD-relatedness index" for each region of the brain.
It offers an intuitive understanding of brain status for an individual patient and across patient groups with respect to Alzheimer's disease (AD) progression.
- Score: 9.063447605302219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) for predicting Alzheimer's disease (AD) has enabled
timely intervention in disease progression, yet it still demands attentive
interpretability to explain how these DL models make definitive decisions.
Recently, counterfactual reasoning has gained increasing attention in medical
research because of its ability to provide a refined visual explanatory map.
However, such visual explanatory maps based on visual inspection alone are
insufficient unless we intuitively demonstrate their medical or neuroscientific
validity via quantitative features. In this study, we synthesize
counterfactual-labeled structural MRIs using our proposed framework and
transform them into gray matter density maps to measure volumetric changes
over parcellated regions of interest (ROIs). We also devised a lightweight
linear classifier to boost the effectiveness of the constructed ROIs, promote
quantitative interpretation, and achieve predictive performance comparable to
DL methods. Throughout this, our framework produces an "AD-relatedness index"
for each ROI and offers an intuitive understanding of brain status for an
individual patient and across patient groups with respect to AD progression.
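The quantification step described above (per-ROI gray-matter volumetric change scored by a lightweight linear classifier, whose signed per-ROI contributions serve as an AD-relatedness index) might be sketched as follows. The ROI names, weights, and change values here are hypothetical placeholders, not the paper's fitted model:

```python
import math

def ad_relatedness(roi_changes, weights, bias=0.0):
    """Per-ROI AD-relatedness index as the signed contribution of each
    ROI's gray-matter volumetric change (real vs. counterfactual) to a
    linear classifier's score. Weights here are illustrative only."""
    contrib = {roi: weights[roi] * delta for roi, delta in roi_changes.items()}
    score = sum(contrib.values()) + bias
    prob_ad = 1.0 / (1.0 + math.exp(-score))  # logistic link on the linear score
    return contrib, prob_ad

# Hypothetical volumetric changes and classifier weights per ROI
changes = {"hippocampus": -0.12, "entorhinal": -0.08, "precentral": -0.01}
weights = {"hippocampus": 4.0, "entorhinal": 3.0, "precentral": 0.2}

index, p = ad_relatedness(changes, weights)
```

Because the classifier is linear, each term `weights[roi] * delta` is an exact additive share of the final score, which is what makes the index quantitatively interpretable.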
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- Explainable Biomedical Hypothesis Generation via Retrieval Augmented Generation enabled Large Language Models [46.05020842978823]
Large Language Models (LLMs) have emerged as powerful tools to navigate this complex data landscape.
RAGGED is a comprehensive workflow designed to support investigators with knowledge integration and hypothesis generation.
arXiv Detail & Related papers (2024-07-17T07:44:18Z)
- An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease [13.213387075528017]
Alzheimer's disease (AD) is the most prevalent form of dementia with a progressive decline in cognitive abilities.
We leveraged structural and functional MRI to investigate the disease-induced GM and functional network connectivity changes.
We propose a novel DL-based classification framework where a generative module employing Cycle GAN was adopted for imputing missing data.
arXiv Detail & Related papers (2024-06-19T07:31:47Z)
- Unifying Interpretability and Explainability for Alzheimer's Disease Progression Prediction [6.582683443485416]
Reinforcement learning has recently shown promise in predicting Alzheimer's disease (AD) progression.
However, it is not clear which RL algorithms are well-suited for this task.
Our work aims to merge predictive accuracy with transparency, assisting clinicians and researchers in enhancing disease progression modeling.
arXiv Detail & Related papers (2024-06-11T23:54:42Z)
- Enhancing Deep Learning Model Explainability in Brain Tumor Datasets using Post-Heuristic Approaches [1.325953054381901]
This study addresses the inherent lack of explainability during decision-making processes.
The primary focus is directed towards refining the explanations generated by the LIME Library and LIME image explainer.
Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results.
arXiv Detail & Related papers (2024-04-30T13:59:13Z)
- Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision [1.5501208213584152]
We introduce an interpretable, multimodal model for Alzheimer's disease (AD) classification over its multi-stage progression, incorporating Jacobian Saliency Map (JSM) as a modality-agnostic tool.
Our evaluation, including an ablation study, demonstrates the efficacy of JSM for model debugging and interpretation, while also significantly enhancing model accuracy.
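The masking idea behind such input-gradient attribution can be illustrated with a generic occlusion sketch (a stand-in for the general principle, not the paper's Jacobian Saliency Map): mask one input feature at a time and record how much the model's output changes.

```python
def occlusion_saliency(predict, x, baseline=0.0):
    """Attribute a model's output to each input feature by occluding
    (masking) one feature at a time and measuring the output drop.
    A generic illustration, not the JSM from the paper."""
    base = predict(x)
    saliency = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline          # occlude feature i
        saliency.append(base - predict(masked))
    return saliency

# Toy model: the output is just the sum of the inputs, so each
# feature's saliency equals its own value.
scores = occlusion_saliency(sum, [1.0, 2.0, 3.0])
```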
arXiv Detail & Related papers (2024-02-25T06:53:35Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
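The core of such reconstruction-based anomaly detection is to compare the input with a model-generated "healthy" version and flag regions with large residuals. A minimal patch-wise sketch follows; the all-zero reconstruction stands in for the output of the diffusion model, and the image values are hypothetical:

```python
def patch_anomaly_map(image, reconstruction, patch=2):
    """Mean absolute reconstruction error per non-overlapping patch.
    High values mark regions the 'healthy' model cannot reproduce."""
    h, w = len(image), len(image[0])
    amap = []
    for r in range(0, h, patch):
        row = []
        for c in range(0, w, patch):
            vals = [abs(image[r + i][c + j] - reconstruction[r + i][c + j])
                    for i in range(patch) for j in range(patch)]
            row.append(sum(vals) / len(vals))
        amap.append(row)
    return amap

# Toy 4x4 "scan" with a bright anomaly in the top-right patch
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
healthy = [[0] * 4 for _ in range(4)]  # placeholder healthy reconstruction
amap = patch_anomaly_map(img, healthy)
```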
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction [8.152884957975354]
We propose a novel framework for image-based classification based on a variational autoencoder (VAE).
The VAE disentangles the latent space based on explanations drawn from existing clinical knowledge.
We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images.
arXiv Detail & Related papers (2020-06-24T15:35:47Z)
- Learning Dynamic and Personalized Comorbidity Networks from Event Data using Deep Diffusion Processes [102.02672176520382]
Comorbid diseases co-occur and progress via complex temporal patterns that vary among individuals.
In electronic health records we can observe the different diseases a patient has, but can only infer the temporal relationships between co-morbid conditions.
We develop deep diffusion processes to model "dynamic comorbidity networks".
arXiv Detail & Related papers (2020-01-08T15:47:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.