Transferability of coVariance Neural Networks and Application to
Interpretable Brain Age Prediction using Anatomical Features
- URL: http://arxiv.org/abs/2305.01807v2
- Date: Thu, 4 May 2023 23:58:12 GMT
- Authors: Saurabh Sihag, Gonzalo Mateos, Corey T. McMillan, Alejandro Ribeiro
- Abstract summary: Graph convolutional networks (GCNs) leverage topology-driven graph convolutional operations to combine information across a graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture of GCNs, and here we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
- Score: 119.45320143101381
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph convolutional networks (GCNs) leverage topology-driven graph
convolutional operations to combine information across the graph for inference
tasks. In our recent work, we have studied GCNs with covariance matrices as
graphs in the form of coVariance neural networks (VNNs) that draw similarities
with traditional PCA-driven data analysis approaches while offering significant
advantages over them. In this paper, we first focus on theoretically
characterizing the transferability of VNNs. The notion of transferability is
motivated from the intuitive expectation that learning models could generalize
to "compatible" datasets (possibly of different dimensionalities) with minimal
effort. VNNs inherit the scale-free data processing architecture of GCNs, and
here we show that VNNs exhibit transferability of performance across datasets
whose covariance matrices converge to a limit object. Multi-scale neuroimaging
datasets enable the study of the brain at multiple scales and hence, can
validate the theoretical results on the transferability of VNNs. To gauge the
advantages offered by VNNs in neuroimaging data analysis, we focus on the task
of "brain age" prediction using cortical thickness features. In clinical
neuroscience, there has been increased interest in machine learning
algorithms that provide estimates of "brain age" that deviate from
chronological age. We leverage the architecture of VNNs to extend beyond the
coarse metric of brain age gap in Alzheimer's disease (AD) and make two
important observations: (i) VNNs can assign anatomical interpretability to
elevated brain age gap in AD, and (ii) the interpretability offered by VNNs is
contingent on their ability to exploit specific principal components of the
anatomical covariance matrix. We further leverage the transferability of VNNs
to cross-validate the above observations across different datasets.
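The abstract's central construction, a coVariance filter, is a polynomial in the sample covariance matrix applied to each data sample. The sketch below (a minimal illustration, not the authors' implementation; function names, the tanh nonlinearity, and the filter taps are all hypothetical choices) shows why the architecture is scale-free: the learned coefficients `h` are independent of the data dimension, so the same filter can be applied to datasets parcellated at different resolutions.

```python
import numpy as np

def covariance_filter(X, h):
    """Apply the coVariance filter H(C) x = sum_k h[k] C^k x to every sample.

    X : (n_samples, d) data matrix (rows are samples)
    h : 1-D array of filter taps; h does not depend on d, which is what
        lets the same filter transfer across dimensionalities.
    """
    C = np.cov(X, rowvar=False)        # sample covariance matrix, (d, d)
    out = np.zeros_like(X, dtype=float)
    Z = X.astype(float)                # Z holds X @ C^k, starting at k = 0
    for hk in h:
        out += hk * Z
        Z = Z @ C                      # advance to the next power of C
    return out

def vnn_layer(X, h):
    """One coVariance-perceptron layer: filter followed by a pointwise
    nonlinearity (tanh is an arbitrary choice here)."""
    return np.tanh(covariance_filter(X, h))

# The same taps apply to cortical-thickness-like data at two "scales":
rng = np.random.default_rng(0)
h = np.array([0.5, 0.3, 0.2])
coarse = vnn_layer(rng.normal(size=(100, 10)), h)   # e.g. 10 brain regions
fine = vnn_layer(rng.normal(size=(100, 68)), h)     # e.g. 68 brain regions
```

When the covariance matrices at the two scales converge to a common limit object, the paper's transferability result says outputs such as `coarse` and `fine` behave consistently under the same taps.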
Related papers
- Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models [0.0]
We employ two structurally different and complementary DNN-based models to classify individual cognitive states.
We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance.
arXiv Detail & Related papers (2024-08-14T15:25:51Z)
- Towards a Foundation Model for Brain Age Prediction using coVariance Neural Networks [102.75954614946258]
Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline.
NeuroVNN is pre-trained as a regression model on healthy population to predict chronological age.
NeuroVNN adds anatomical interpretability to brain age and has a 'scale-free' characteristic that allows its transfer to datasets curated according to any arbitrary brain atlas.
arXiv Detail & Related papers (2024-02-12T14:46:31Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- Predicting Brain Age using Transferable coVariance Neural Networks [119.45320143101381]
We have recently studied covariance neural networks (VNNs) that operate on sample covariance matrices.
In this paper, we demonstrate the utility of VNNs in inferring brain age using cortical thickness data.
Our results show that VNNs exhibit multi-scale and multi-site transferability for inferring brain age.
In the context of brain age in Alzheimer's disease (AD), our experiments show that (i) VNN outputs are interpretable, as brain age predicted using VNNs is significantly elevated for AD subjects with respect to healthy subjects.
arXiv Detail & Related papers (2022-10-28T18:58:34Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
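The stability claim above can be illustrated with a toy contrast (a hypothetical sketch, not the paper's experiment; the eigenvalue profile and filter taps are arbitrary placeholders): when the top eigenvalues of the covariance matrix nearly coincide, the leading eigenvector that PCA projects onto is ill-conditioned under sampling noise, while a fixed polynomial of the covariance matrix, the VNN filter, varies smoothly with it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariance with a near-repeated top eigenvalue: the leading eigenvector
# is ill-conditioned, but any fixed polynomial of C is well-conditioned.
d = 6
eigvals = np.array([2.00, 1.99, 1.0, 0.6, 0.3, 0.1])
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
C_true = Q @ np.diag(eigvals) @ Q.T

def leading_eigvec(C):
    _, V = np.linalg.eigh(C)           # eigenvalues in ascending order
    return V[:, -1]

def poly_filter(C, h=(0.5, 0.3, 0.2)):
    # The coVariance filter as an operator: sum_k h[k] C^k
    return sum(hk * np.linalg.matrix_power(C, k) for k, hk in enumerate(h))

# Sample covariances from two independent draws of the same distribution
Xa = rng.multivariate_normal(np.zeros(d), C_true, size=400)
Xb = rng.multivariate_normal(np.zeros(d), C_true, size=400)
Ca = np.cov(Xa, rowvar=False)
Cb = np.cov(Xb, rowvar=False)

# Sign-aligned discrepancy between the two PCA directions, versus the
# relative discrepancy between the two filter operators.
va, vb = leading_eigvec(Ca), leading_eigvec(Cb)
pca_gap = min(np.linalg.norm(va - vb), np.linalg.norm(va + vb))
filt_gap = (np.linalg.norm(poly_filter(Ca) - poly_filter(Cb))
            / np.linalg.norm(poly_filter(C_true)))
```

Repeating this with many resamples would typically show the eigenvector gap swinging widely (the two top directions mix freely) while the filter gap shrinks at the usual sampling rate, which is the qualitative sense in which VNN outputs are more stable than PCA projections.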
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Modeling Spatio-Temporal Dynamics in Brain Networks: A Comparison of Graph Neural Network Architectures [0.5033155053523041]
Graph neural networks (GNNs) provide a way to interpret new structured graph signals.
We show that by learning localized functional interactions on the substrate, GNN based approaches are able to robustly scale to large network studies.
arXiv Detail & Related papers (2021-12-08T12:57:13Z)
- A Graph Neural Network Framework for Causal Inference in Brain Networks [0.3392372796177108]
A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static backbone.
We present a graph neural network (GNN) framework to describe functional interactions based on structural anatomical layout.
We show that GNNs are able to capture long-term dependencies in data and also scale up to the analysis of large-scale networks.
arXiv Detail & Related papers (2020-10-14T15:01:21Z)
- Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities and Differences [36.82069150045153]
Spiking neural networks (SNNs) and recurrent neural networks (RNNs) are benchmarked on neuromorphic data.
In this work, we make a systematic study to compare SNNs and RNNs on neuromorphic data.
arXiv Detail & Related papers (2020-05-02T10:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.