Unifying Interpretability and Explainability for Alzheimer's Disease Progression Prediction
- URL: http://arxiv.org/abs/2406.07777v1
- Date: Tue, 11 Jun 2024 23:54:42 GMT
- Title: Unifying Interpretability and Explainability for Alzheimer's Disease Progression Prediction
- Authors: Raja Farrukh Ali, Stephanie Milani, John Woods, Emmanuel Adenij, Ayesha Farooq, Clayton Mansel, Jeffrey Burns, William Hsu
- Abstract summary: Reinforcement learning has recently shown promise in predicting Alzheimer's disease (AD) progression.
It is not clear which RL algorithms are well-suited for this task.
Our work aims to merge predictive accuracy with transparency, assisting clinicians and researchers in enhancing disease progression modeling.
- Score: 6.582683443485416
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reinforcement learning (RL) has recently shown promise in predicting Alzheimer's disease (AD) progression due to its unique ability to model domain knowledge. However, it is not clear which RL algorithms are well-suited for this task. Furthermore, these methods are not inherently explainable, limiting their applicability in real-world clinical scenarios. Our work addresses these two important questions. Using a causal, interpretable model of AD, we first compare the performance of four contemporary RL algorithms in predicting brain cognition over 10 years using only baseline (year 0) data. We then apply SHAP (SHapley Additive exPlanations) to explain the decisions made by each algorithm in the model. Our approach combines interpretability with explainability to provide insights into the key factors influencing AD progression, offering both global and individual, patient-level analysis. Our findings show that only one of the RL methods is able to satisfactorily model disease progression, but the post-hoc explanations indicate that all methods fail to properly capture the importance of amyloid accumulation, one of the pathological hallmarks of Alzheimer's disease. Our work aims to merge predictive accuracy with transparency, assisting clinicians and researchers in enhancing disease progression modeling for informed healthcare decisions. Code is available at https://github.com/rfali/xrlad.
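The paper applies SHAP to attribute each algorithm's predictions to input features at both the global and individual patient level. As context for what SHAP approximates, here is a minimal, self-contained sketch of exact Shapley attribution for a single prediction; the feature names, weights, and toy model below are illustrative stand-ins, not the paper's actual agents or data (which use the `shap` library on trained RL policies).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x).

    Absent features are replaced by their baseline value, so phi[i]
    is feature i's weighted average marginal contribution over all
    coalitions of the remaining features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy stand-in for a policy's cognition prediction from baseline biomarkers.
# Feature names and weights are hypothetical, chosen only for illustration.
features = ["amyloid", "tau", "education", "age"]
weights = [0.5, 0.8, -0.2, 0.1]
f = lambda v: sum(w * x for w, x in zip(weights, v))

patient = [1.2, 0.9, 1.0, 0.7]
baseline = [0.0, 0.0, 0.0, 0.0]
phi = shapley_values(f, patient, baseline)
```

For a linear model with a zero baseline, each attribution reduces to weight times feature value, and the attributions sum to the prediction (the efficiency property), which is what lets the paper read off how much weight each method assigns to, e.g., amyloid accumulation.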
Related papers
- Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech [60.08015780474457]
Alzheimer's Disease (AD) detection has emerged as a promising research area that employs machine learning classification models.
We identify within-class variation as a critical challenge in AD detection: individuals with AD exhibit a spectrum of cognitive impairments.
We propose two novel methods, Soft Target Distillation (SoTD) and Instance-level Re-balancing (InRe), targeting these two problems respectively.
arXiv Detail & Related papers (2024-09-22T02:06:05Z)
- Intelligent Diagnosis of Alzheimer's Disease Based on Machine Learning [24.467566885575998]
This study is based on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
It aims to explore early detection and disease progression in Alzheimer's disease (AD).
arXiv Detail & Related papers (2024-02-13T15:43:30Z)
- DDxT: Deep Generative Transformer Models for Differential Diagnosis [51.25660111437394]
We show that a generative approach trained with simpler supervised and self-supervised learning signals can achieve superior results on the current benchmark.
The proposed Transformer-based generative network, named DDxT, autoregressively produces a set of possible pathologies, i.e., DDx, and predicts the actual pathology using a neural network.
arXiv Detail & Related papers (2023-12-02T22:57:25Z)
- Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL [86.0987896274354]
We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.
We then propose a novel Self-Excite Eigenvalue Measure (SEEM) metric to measure the evolving properties of the Q-network during training.
For the first time, our theory can reliably decide whether the training will diverge at an early stage.
arXiv Detail & Related papers (2023-10-06T17:57:44Z)
- A Quantitatively Interpretable Model for Alzheimer's Disease Prediction Using Deep Counterfactuals [9.063447605302219]
Our framework produces an "AD-relatedness index" for each region of the brain.
It offers an intuitive understanding of brain status for an individual patient and across patient groups with respect to Alzheimer's disease (AD) progression.
arXiv Detail & Related papers (2023-10-05T10:55:10Z)
- Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated using the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z)
- An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury [8.913544654492696]
We implement two prediction models for short- and long-term outcomes of traumatic brain injury.
Six different interpretation techniques were used to describe both prediction models at the local and global levels.
The implemented methods were compared to one another in terms of several XAI characteristics such as understandability, fidelity, and stability.
arXiv Detail & Related papers (2022-08-13T19:44:00Z)
- Reinforcement Learning based Disease Progression Model for Alzheimer's Disease [3.1224202646855894]
We model Alzheimer's disease (AD) progression by combining differential equations (DEs) and reinforcement learning (RL) with domain knowledge.
We use our model consisting of DEs (as a simulator) and the trained RL agent to predict individualized 10-year AD progression.
Our framework combines DEs with RL for modelling AD progression and has broad applicability for understanding other neurological disorders.
arXiv Detail & Related papers (2021-06-30T16:32:12Z)
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic steatohepatitis (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection and up to 0.609 in PR-AUC for NASH detection, outperforming the best state-of-the-art baseline by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Application of Machine Learning to Predict the Risk of Alzheimer's
Disease: An Accurate and Practical Solution for Early Diagnostics [1.1470070927586016]
Alzheimer's Disease (AD) ravages the cognitive ability of more than 5 million Americans and creates an enormous strain on the health care system.
This paper proposes a machine learning predictive model for AD development without medical imaging and with fewer clinical visits and tests.
Our model is trained and validated using demographic, biomarker and cognitive test data from two prominent research studies.
arXiv Detail & Related papers (2020-06-02T14:52:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.