Ensemble manifold based regularized multi-modal graph convolutional
network for cognitive ability prediction
- URL: http://arxiv.org/abs/2101.08316v1
- Date: Wed, 20 Jan 2021 20:53:07 GMT
- Title: Ensemble manifold based regularized multi-modal graph convolutional
network for cognitive ability prediction
- Authors: Gang Qu, Li Xiao, Wenxing Hu, Kun Zhang, Vince D. Calhoun, Yu-Ping
Wang
- Abstract summary: Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores.
- Score: 33.03449099154264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Multi-modal functional magnetic resonance imaging (fMRI) can be
used to make predictions about individual behavioral and cognitive traits based
on brain connectivity networks. Methods: To take advantage of complementary
information from multi-modal fMRI, we propose an interpretable multi-modal
graph convolutional network (MGCN) model, incorporating the fMRI time series
and the functional connectivity (FC) between each pair of brain regions.
Specifically, our model learns a graph embedding from individual brain networks
derived from multi-modal data. A manifold-based regularization term is then
enforced to consider the relationships of subjects both within and between
modalities. Furthermore, we propose gradient-weighted regression activation
mapping (Grad-RAM) and edge mask learning to interpret the model and identify
significant cognition-related biomarkers. Results: We validate
our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict
individual Wide Range Achievement Test (WRAT) scores. Our model obtains superior
predictive performance over GCN with a single modality and other competing
approaches. The identified biomarkers are cross-validated from different
approaches. Conclusion and Significance: This paper develops a new
interpretable graph deep learning framework for cognitive ability prediction,
with the potential to overcome the limitations of several current data-fusion
models. The results demonstrate the power of MGCN in analyzing multi-modal fMRI
and discovering significant biomarkers for human brain studies.
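The manifold-based regularization described in the abstract encourages subjects that are similar (within or across modalities) to have nearby embeddings. A minimal numpy sketch of that idea, not the authors' implementation: `F`, `W`, and `manifold_regularizer` are hypothetical names, with subject embeddings stacked row-wise and a symmetric subject-similarity matrix assumed.

```python
import numpy as np

def manifold_regularizer(F, W):
    """Graph-Laplacian smoothness penalty tr(F^T L F).

    F : (n_subjects, d) matrix of subject embeddings, one row per subject
    W : (n_subjects, n_subjects) symmetric subject-similarity matrix
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian L = D - W
    return np.trace(F.T @ L @ F)

rng = np.random.default_rng(0)
F = rng.normal(size=(5, 3))
W = rng.uniform(size=(5, 5))
W = (W + W.T) / 2                           # symmetrize
np.fill_diagonal(W, 0)

# The trace form equals the pairwise form 0.5 * sum_ij W_ij ||f_i - f_j||^2,
# so the penalty grows when similar subjects (large W_ij) are embedded far apart.
pairwise = 0.5 * sum(
    W[i, j] * np.sum((F[i] - F[j]) ** 2)
    for i in range(5) for j in range(5)
)
assert np.isclose(manifold_regularizer(F, W), pairwise)
```

In practice such a term would be added, with a weight hyperparameter, to the prediction loss; identical embeddings give zero penalty.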
Related papers
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans [11.420077093805382]
We propose to use graph neural networks (GNN) that are designed to deal with problems in non-Euclidean domains.
In this study, we demonstrate how brain networks can be created from sMRI or PET images.
We then present a multi-modal GNN framework where each modality has its own branch of GNN and a technique is proposed to combine the multi-modal data.
arXiv Detail & Related papers (2023-07-31T02:04:05Z)
- MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction [3.2889220522843625]
We develop an innovative fusion approach called MaxCorr MGNN that models non-linear modality correlations within and across patients.
We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs.
We evaluate our model on an outcome prediction task on a tuberculosis dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
arXiv Detail & Related papers (2023-07-13T23:52:41Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Multi-modal Dynamic Graph Network: Coupling Structural and Functional Connectome for Disease Diagnosis and Classification [8.67028273829113]
We propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and functional brain network learning.
Our method benefits from modeling inter-modal representations and encoding attentive multi-modal associations into dynamic graphs.
arXiv Detail & Related papers (2022-10-25T02:41:32Z)
- DynDepNet: Learning Time-Varying Dependency Structures from fMRI Data via Dynamic Graph Structure Learning [58.94034282469377]
We propose DynDepNet, a novel method for learning the optimal time-varying dependency structure of fMRI data induced by downstream prediction tasks.
Experiments on real-world fMRI datasets, for the task of sex classification, demonstrate that DynDepNet achieves state-of-the-art results.
arXiv Detail & Related papers (2022-09-27T16:32:11Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Multi-modal learning for predicting the genotype of glioma [14.93152817415408]
The isocitrate dehydrogenase (IDH) gene mutation is an essential biomarker for the diagnosis and prognosis of glioma.
It is promising to better predict glioma genotype by integrating focal tumor image and geometric features with brain network features derived from MRI.
We propose a multi-modal learning framework using three separate encoders to extract features of focal tumor image, tumor geometrics and global brain networks.
arXiv Detail & Related papers (2022-03-21T10:20:04Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
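The MultiView ICA model summarized above represents each subject's data as a subject-specific linear mixture of shared independent sources plus noise. A minimal numpy sketch of that generative model on synthetic data, with all names and sizes illustrative assumptions rather than the paper's code:

```python
import numpy as np

rng = np.random.default_rng(42)
k, t, n_subjects = 3, 2000, 8        # sources, time points, subjects

S = rng.laplace(size=(k, t))          # shared non-Gaussian independent sources
sigma = 0.1
As, X = [], []
for _ in range(n_subjects):
    A = rng.normal(size=(k, k))       # subject-specific mixing matrix
    As.append(A)
    X.append(A @ S + sigma * rng.normal(size=(k, t)))   # x_i = A_i s + n_i

# With the mixing matrices known (here, by construction), unmixing each view
# and averaging across subjects recovers the shared sources with reduced noise,
# which is the structure the ICA model exploits when estimating A_i from data.
S_hat = np.mean([np.linalg.pinv(A) @ x for A, x in zip(As, X)], axis=0)
```

The actual method estimates the unmixing matrices blindly from the data; this sketch only illustrates why pooling many views of the same sources improves sensitivity.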
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.