Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to
Reinforce an Alzheimer's Disease Diagnosis Model
- URL: http://arxiv.org/abs/2108.09451v1
- Date: Sat, 21 Aug 2021 07:29:13 GMT
- Title: Learn-Explain-Reinforce: Counterfactual Reasoning and Its Guidance to
Reinforce an Alzheimer's Disease Diagnosis Model
- Authors: Kwanseok Oh, Jee Seok Yoon, and Heung-Il Suk
- Abstract summary: We propose a novel framework that unifies diagnostic model learning, visual explanation generation, and trained diagnostic model reinforcement.
For the visual explanation, we generate a counterfactual map that transforms an input sample to be identified as a target label.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing studies on disease diagnostic models focus either on diagnostic
model learning for performance improvement or on the visual explanation of a
trained diagnostic model. We propose a novel learn-explain-reinforce (LEAR)
framework that unifies diagnostic model learning, visual explanation generation
(explanation unit), and trained diagnostic model reinforcement (reinforcement
unit) guided by the visual explanation. For the visual explanation, we generate
a counterfactual map that transforms an input sample to be identified as an
intended target label. For example, a counterfactual map can localize
hypothetical abnormalities within a normal brain image that may cause it to be
diagnosed with Alzheimer's disease (AD). We believe that the generated
counterfactual maps represent data-driven and model-induced knowledge about a
target task, i.e., AD diagnosis using structural MRI, which can be a vital
source of information to reinforce the generalization of the trained diagnostic
model. To this end, we devise an attention-based feature refinement module with
the guidance of the counterfactual maps. The explanation and reinforcement
units are reciprocal and can be operated iteratively. Our proposed approach was
validated via qualitative and quantitative analysis on the ADNI dataset. Its
comprehensibility and fidelity were demonstrated through ablation studies and
comparisons with existing methods.
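The core idea of the explanation unit, transforming an input so the diagnostic model assigns it an intended target label, can be sketched with a toy gradient-based counterfactual map. The logistic classifier, step count, and learning rate below are illustrative stand-ins, not the paper's CNN-based generator or architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a trained diagnostic model: a fixed logistic
# classifier over a 3-dimensional feature vector.
w = np.array([1.5, -2.0, 0.5])
b = -0.1

def predict(x):
    return sigmoid(w @ x + b)  # probability of the target ("AD") label

def counterfactual_map(x, target=1.0, steps=200, lr=0.1):
    """Gradient-based sketch of a counterfactual map: an additive
    perturbation m such that predict(x + m) approaches `target`."""
    m = np.zeros_like(x)
    for _ in range(steps):
        p = predict(x + m)
        grad = (p - target) * w  # gradient of BCE-to-target w.r.t. input
        m -= lr * grad
    return m

x = np.array([-1.0, 1.0, 0.0])   # sample the model labels "normal"
m = counterfactual_map(x, target=1.0)
x_cf = x + m                     # transformed (counterfactual) sample
```

Here the map `m` plays the role of the localized hypothetical abnormalities described above: where it is large, changing the input most affects the diagnosis.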
Related papers
- Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data
Early diagnosis and intervention for Autism Spectrum Disorder (ASD) have been shown to significantly improve the quality of life of autistic individuals.
There is a need for objective biomarkers of ASD which can help improve diagnostic accuracy.
Deep learning (DL) has achieved outstanding performance in diagnosing diseases and conditions from medical imaging data.
This research aims to improve the accuracy and interpretability of ASD diagnosis by creating a DL model that can not only accurately classify ASD but also provide explainable insights into its working.
arXiv Detail & Related papers (2024-09-19T23:08:09Z) - A Quantitative Approach for Evaluating Disease Focus and Interpretability of Deep Learning Models for Alzheimer's Disease Classification
Deep learning (DL) models have shown significant potential in Alzheimer's Disease (AD) classification.
We developed a quantitative disease-focusing strategy to enhance the interpretability of DL models.
We evaluated these models in terms of their abilities to focus on disease-relevant regions.
arXiv Detail & Related papers (2024-09-07T19:16:40Z) - A Survey of Models for Cognitive Diagnosis: New Developments and Future Directions [66.40362209055023]
This paper aims to provide a survey of current models for cognitive diagnosis, with more attention on new developments using machine learning-based methods.
By comparing the model structures, parameter estimation algorithms, model evaluation methods and applications, we provide a relatively comprehensive review of the recent trends in cognitive diagnosis models.
arXiv Detail & Related papers (2024-07-07T18:02:00Z) - Towards the Identifiability and Explainability for Personalized Learner
Modeling: An Inductive Paradigm
We propose an identifiable cognitive diagnosis framework (ID-CDF) based on a novel response-proficiency-response paradigm inspired by encoder-decoder models.
We show that ID-CDF can effectively address the problems without loss of diagnosis preciseness.
arXiv Detail & Related papers (2023-09-01T07:18:02Z) - Deep Reinforcement Learning Framework for Thoracic Diseases
Classification via Prior Knowledge Guidance
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated using the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z) - Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z) - Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z) - XADLiME: eXplainable Alzheimer's Disease Likelihood Map Estimation via
Clinically-guided Prototype Learning
We propose a novel deep-learning approach through XADLiME for AD progression modeling over 3D sMRIs.
Specifically, we establish a set of topologically-aware prototypes onto the clusters of latent clinical features, uncovering an AD spectrum manifold.
We then measure the similarities between latent clinical features and well-established prototypes, estimating a "pseudo" likelihood map.
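The prototype-similarity step summarized above can be sketched as a softmax over distances from a latent feature to a set of prototypes. The vectors, Euclidean distance, and temperature below are illustrative assumptions, not XADLiME's actual formulation:

```python
import numpy as np

def pseudo_likelihood(z, prototypes, tau=1.0):
    """Softmax over negative distances: a "pseudo" likelihood of the
    latent feature z belonging near each prototype."""
    d = np.linalg.norm(prototypes - z, axis=1)  # distance to each prototype
    logits = -d / tau
    e = np.exp(logits - logits.max())           # numerically stable softmax
    return e / e.sum()

prototypes = np.array([[0.0, 0.0],
                       [1.0, 1.0],
                       [2.0, 2.0]])   # hypothetical stage prototypes
z = np.array([0.1, 0.0])              # latent clinical feature
p = pseudo_likelihood(z, prototypes)  # highest mass at nearest prototype
```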
arXiv Detail & Related papers (2022-07-27T00:25:55Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Interpretation of Brain Morphology in Association to Alzheimer's Disease
Dementia Classification Using Graph Convolutional Networks on Triangulated
Meshes
We propose a mesh-based technique to aid in the classification of Alzheimer's disease dementia (ADD) using mesh representations of the cortex and subcortical structures.
We outperform other machine learning methods with a 96.35% testing accuracy for the ADD vs. healthy control problem.
arXiv Detail & Related papers (2020-08-14T01:10:39Z) - Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete
Labels
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
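A dynamic adjacency matrix of the kind the DD-GCN summary mentions can be sketched as row-normalized feature similarity, so the graph structure is derived from the data rather than fixed. The feature vectors and softmax normalization below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def dynamic_adjacency(F):
    """Data-dependent adjacency: softmax over pairwise feature similarity,
    so each node's edge weights sum to 1."""
    S = F @ F.T                                   # node-to-node similarity
    e = np.exp(S - S.max(axis=1, keepdims=True))  # stable row-wise softmax
    return e / e.sum(axis=1, keepdims=True)

F = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])   # hypothetical features for 3 disease nodes
A = dynamic_adjacency(F)
H = A @ F                    # one graph-convolution-style mixing step
```

Nodes with similar features (the first two rows) end up more strongly connected than dissimilar ones, which is what lets correlations between diseases be learned from the feature maps themselves.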
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.