NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants
- URL: http://arxiv.org/abs/2301.00815v4
- Date: Thu, 25 May 2023 11:04:57 GMT
- Title: NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants
- Authors: Chenyu Xue, Fan Wang, Yuanzhuo Zhu, Hui Li, Deyu Meng, Dinggang Shen,
and Chunfeng Lian
- Abstract summary: We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
- Score: 73.85768093666582
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deploying reliable deep learning techniques in interdisciplinary
applications requires learned models to output accurate and (even more
importantly) explainable predictions. Existing approaches typically explicate
network outputs in a post-hoc fashion, under the implicit assumption that
faithful explanations come from accurate predictions/classifications. We make
the opposite claim: explanations boost (or even determine) classification. That
is, end-to-end learning of explanation factors to augment discriminative
representation extraction can be a more intuitive strategy that, in turn,
assures fine-grained explainability, e.g., in neuroimaging and neuroscience
studies with high-dimensional data containing noisy, redundant, and
task-irrelevant information. In this paper, we propose such an explainable
geometric deep network, dubbed NeuroExplainer, and apply it to uncover altered
infant cortical development patterns associated with preterm birth. Given
fundamental cortical attributes as network input, NeuroExplainer adopts a
hierarchical attention-decoding framework to learn fine-grained attention maps
and the corresponding discriminative representations, accurately distinguishing
preterm infants from term-born infants at term-equivalent age. The hierarchical
attention-decoding modules are learned under subject-level weak supervision
coupled with targeted regularizers derived from domain knowledge of brain
development. These prior-guided constraints implicitly maximize the
explainability metrics (i.e., fidelity, sparsity, and stability) during network
training, driving the learned network to output detailed explanations and
accurate classifications. Experimental results on the public dHCP benchmark
suggest that NeuroExplainer yields quantitatively reliable explanations that
are qualitatively consistent with representative neuroimaging studies.
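
For intuition only, here is a minimal, hypothetical sketch (not the authors'
released code) of how prior-guided attention regularizers can be folded into a
subject-level classification loss so that sparsity and stability of the
attention maps are optimized jointly with accuracy. The geometric (spherical)
backbone is omitted, and the function names, loss weights, and tensor shapes
below are illustrative assumptions.

    # Hypothetical sketch: classification loss plus attention regularizers that
    # favor sparse, stable attention maps. Illustrative of the idea only, not
    # the authors' implementation; shapes, weights, and names are assumptions.
    import torch
    import torch.nn.functional as F


    def sparsity_penalty(attention: torch.Tensor) -> torch.Tensor:
        """Low entropy of the normalized per-subject attention favors maps
        that concentrate on a small set of cortical vertices."""
        normalized = attention / (attention.sum(dim=-1, keepdim=True) + 1e-8)
        entropy = -(normalized * (normalized + 1e-8).log()).sum(dim=-1)
        return entropy.mean()


    def stability_penalty(attention: torch.Tensor,
                          attention_perturbed: torch.Tensor) -> torch.Tensor:
        """Penalize attention maps that change much under small input perturbations."""
        return F.mse_loss(attention, attention_perturbed)


    def total_loss(logits, labels, attention, attention_perturbed,
                   lambda_sparse=0.1, lambda_stable=0.1):
        """Subject-level weak supervision (class labels only) plus regularizers."""
        return (F.cross_entropy(logits, labels)
                + lambda_sparse * sparsity_penalty(attention)
                + lambda_stable * stability_penalty(attention, attention_perturbed))


    if __name__ == "__main__":
        batch, n_vertices, n_classes = 4, 10242, 2  # toy sizes
        logits = torch.randn(batch, n_classes)
        labels = torch.randint(0, n_classes, (batch,))
        attn = torch.rand(batch, n_vertices)
        attn_perturbed = attn + 0.01 * torch.randn_like(attn)
        print(total_loss(logits, labels, attn, attn_perturbed))
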
Related papers
- On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis [1.55858752644861]
State of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans.
We introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations.
arXiv Detail & Related papers (2024-04-21T07:57:45Z)
- Towards Generating Informative Textual Description for Neurons in Language Models [6.884227665279812]
We propose a framework that ties textual descriptions to neurons.
In particular, our experiments show that the proposed approach achieves 75% precision@2 and 50% recall@2 (a short precision@k/recall@k sketch follows this list).
arXiv Detail & Related papers (2024-01-30T04:06:25Z)
- Explainable Brain Age Prediction using coVariance Neural Networks [94.81523881951397]
We propose an explanation-driven and anatomically interpretable framework for brain age prediction using cortical thickness features.
Specifically, our brain age prediction framework extends beyond the coarse metric of brain age gap in Alzheimer's disease (AD).
We make two important observations: VNNs can assign anatomical interpretability to elevated brain age gap in AD by identifying contributing brain regions.
arXiv Detail & Related papers (2023-05-27T22:28:25Z)
- Autism spectrum disorder classification based on interpersonal neural synchrony: Can classification be improved by dyadic neural biomarkers using unsupervised graph representation learning? [0.0]
We introduce unsupervised graph representations that explicitly map the neural mechanisms of a core aspect of ASD.
First results from functional near-infrared spectroscopy data indicate potential predictive capacities of a task-agnostic, interpretable graph representation.
arXiv Detail & Related papers (2022-08-17T07:10:57Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called the structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures [15.946511512356878]
We show that a fully-learned neural network with the right inductive biases can perform substantially better than all previous neural-symbolic models.
Our model makes critical use of both self-attention and learned "soft" object-centric representations.
arXiv Detail & Related papers (2020-12-15T18:57:40Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when the cross-entropy loss is minimized by capturing only a subset of the features relevant to the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Visual Explanation for Identification of the Brain Bases for Dyslexia on fMRI Data [13.701992590330395]
We apply network visualization techniques to the convolutional layers responsible for learning high-level features, and show that they yield meaningful images offering expert-backed insights into the condition being classified.
Our results not only show accurate classification of developmental dyslexia from brain imaging alone, but also provide automatic visualizations of the features involved that match contemporary neuroscientific knowledge.
arXiv Detail & Related papers (2020-07-17T22:11:30Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
In doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
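
As referenced in the entry on textual descriptions for neurons above, the
following is a brief sketch of the standard precision@k and recall@k metrics.
The toy ranking and gold set are invented for illustration and are not taken
from the cited paper.

    # Standard precision@k / recall@k, shown with invented toy data (not from
    # the cited paper) to make figures like "75% precision@2" concrete.
    from typing import Sequence, Set


    def precision_at_k(ranked: Sequence[str], relevant: Set[str], k: int) -> float:
        """Fraction of the top-k ranked items that are relevant."""
        return sum(item in relevant for item in ranked[:k]) / k


    def recall_at_k(ranked: Sequence[str], relevant: Set[str], k: int) -> float:
        """Fraction of all relevant items that appear in the top-k."""
        return sum(item in relevant for item in ranked[:k]) / max(len(relevant), 1)


    if __name__ == "__main__":
        ranked_descriptions = ["syntax", "sentiment", "negation", "tense"]
        gold = {"sentiment", "negation"}
        print(precision_at_k(ranked_descriptions, gold, k=2))  # 0.5
        print(recall_at_k(ranked_descriptions, gold, k=2))     # 0.5
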