Looking deeper into interpretable deep learning in neuroimaging: a
comprehensive survey
- URL: http://arxiv.org/abs/2307.09615v1
- Date: Fri, 14 Jul 2023 04:50:04 GMT
- Title: Looking deeper into interpretable deep learning in neuroimaging: a
comprehensive survey
- Authors: Md. Mahfuzur Rahman, Vince D. Calhoun, Sergey M. Plis
- Abstract summary: This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
- Score: 20.373311465258393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning (DL) models have been popular due to their ability to learn
directly from the raw data in an end-to-end paradigm, alleviating the concern
of a separate error-prone feature extraction phase. Recent DL-based
neuroimaging studies have also witnessed a noticeable performance advancement
over traditional machine learning algorithms. However, the lack of transparency in
these models remains a challenge for their successful deployment in real-world
applications. In recent years, Explainable AI (XAI) has seen a surge of development
aimed mainly at understanding how models reach their decisions, which is essential
in safety-critical domains such as healthcare, finance, and law enforcement. While
the interpretability domain is advancing noticeably,
researchers are still unclear about what aspect of model learning a post hoc
method reveals and how to validate its reliability. This paper comprehensively
reviews interpretable deep learning models in the neuroimaging domain. Firstly,
we summarize the current status of interpretability resources in general,
focusing on the progression of methods, associated challenges, and opinions.
Secondly, we discuss how multiple recent neuroimaging studies leveraged model
interpretability to capture anatomical and functional brain alterations most
relevant to model predictions. Finally, we discuss the limitations of the
current practices and offer some valuable insights and guidance on how we can
steer our future research directions to make deep learning models substantially
interpretable and thus advance scientific understanding of brain disorders.
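To make the second point concrete, the sketch below illustrates the kind of post hoc, gradient-based saliency analysis that many of the reviewed neuroimaging studies apply to a trained classifier, followed by a simple model-randomization sanity check of the sort the abstract's reliability concern refers to. The toy 3D CNN, input size, and target class are illustrative assumptions, not the survey's own models or methods.

```python
# Minimal, hypothetical sketch (PyTorch): gradient saliency for a toy 3D CNN.
# The architecture, input size, and sanity check are illustrative assumptions,
# not the specific models or methods reviewed in the survey.
import torch
import torch.nn as nn


class Tiny3DCNN(nn.Module):
    """Toy 3D CNN standing in for a structural-MRI classifier (illustrative only)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.classifier = nn.Linear(8 * 4 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def saliency_map(model: nn.Module, volume: torch.Tensor, target: int) -> torch.Tensor:
    """Post hoc saliency: gradient of the target-class logit w.r.t. the input volume."""
    model.eval()
    volume = volume.clone().requires_grad_(True)
    logits = model(volume)
    logits[0, target].backward()
    # Voxel-wise gradient magnitude: larger values mark regions the prediction
    # is most sensitive to (a proxy for "relevant brain alterations").
    return volume.grad.abs().squeeze()


if __name__ == "__main__":
    model = Tiny3DCNN()
    mri = torch.randn(1, 1, 32, 32, 32)        # (batch, channel, depth, height, width)
    smap = saliency_map(model, mri, target=1)  # relevance volume, shape (32, 32, 32)

    # One common reliability check (the validation gap the survey raises):
    # recompute the map after randomizing the weights; if it barely changes,
    # the explanation may not reflect what the model actually learned.
    for p in model.parameters():
        nn.init.normal_(p)
    smap_random = saliency_map(model, mri, target=1)
    corr = torch.corrcoef(torch.stack([smap.flatten(), smap_random.flatten()]))[0, 1]
    print(smap.shape, corr)
```

Attribution libraries such as Captum package gradient saliency and related post hoc methods; the bare-gradient version above is kept dependency-light purely for illustration.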
Related papers
- Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning [0.5840945370755134]
We introduce the Progressive Self-Paced Distillation (PSPD) framework, employing an adaptive and progressive pacing and distillation mechanism.
We validate PSPD's efficacy and adaptability across various convolutional neural networks using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
arXiv Detail & Related papers (2024-07-23T02:26:04Z)
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming major challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning [27.841725567976315]
We propose a novel framework utilizing Adversarial Inverse Reinforcement Learning.
This framework provides global explanations for decisions made by a Reinforcement Learning model.
We capture intuitive tendencies that the model follows by summarizing the model's decision-making process.
arXiv Detail & Related papers (2022-03-30T17:01:59Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Algebraic Learning: Towards Interpretable Information Modeling [0.0]
This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two scopes.
Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally.
Secondly, given a trained model, various methods could be applied to extract further insights about the underlying system.
arXiv Detail & Related papers (2022-03-13T15:53:39Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
- Deep neural network models for computational histopathology: A survey [1.2891210250935146]
Deep learning has become the mainstream methodological choice for analyzing and interpreting cancer histology images.
In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in this field.
We highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
arXiv Detail & Related papers (2019-12-28T01:04:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.