Multi-modal Dynamic Graph Network: Coupling Structural and Functional
Connectome for Disease Diagnosis and Classification
- URL: http://arxiv.org/abs/2210.13721v1
- Date: Tue, 25 Oct 2022 02:41:32 GMT
- Title: Multi-modal Dynamic Graph Network: Coupling Structural and Functional
Connectome for Disease Diagnosis and Classification
- Authors: Yanwu Yang, Xutao Guo, Zhikai Chang, Chenfei Ye, Yang Xiang, Ting Ma
- Abstract summary: We propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and functional brain network learning.
Our method benefits from modeling inter-modal representations and encoding attentive multi-modal associations into dynamic graphs.
- Score: 8.67028273829113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal neuroimaging technology has greatly improved diagnostic efficiency and accuracy by providing complementary information for discovering
objective disease biomarkers. Conventional deep learning methods, e.g.
convolutional neural networks, overlook relationships between nodes and fail to
capture topological properties in graphs. Graph neural networks have been
proven to be of great importance in modeling brain connectome networks and
relating disease-specific patterns. However, most existing graph methods
explicitly require known graph structures, which are not available in the
sophisticated brain system. In heterogeneous multi-modal brain networks in particular, modeling interactions among brain regions while accounting for inter-modal dependencies remains a major challenge. In this study, we propose
a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and
functional brain network learning. Our method benefits from modeling inter-modal representations and encoding attentive multi-modal associations into dynamic graphs with a compositional correspondence matrix. Moreover, a
bilateral graph convolution layer is proposed to aggregate multi-modal
representations in terms of multi-modal associations. Extensive experiments on
three datasets demonstrate the superiority of our proposed method in terms of
disease classification, with accuracies of 90.4%, 85.9%, and 98.3% in
predicting Mild Cognitive Impairment (MCI), Parkinson's disease (PD), and
schizophrenia (SCHZ), respectively. Furthermore, statistical evaluation of the correspondence matrix shows strong agreement with previously reported biomarker evidence.
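To make the coupling idea concrete, the following Python (PyTorch) sketch shows one plausible reading of the abstract: an attention-style correspondence matrix computed between structural and functional node embeddings, and a bilateral graph convolution layer that mixes each modality's own neighbourhood aggregation with messages routed through that matrix. The layer names, shapes, and exact update rule are illustrative assumptions, not the authors' released implementation.

# Minimal sketch (NOT the authors' code): a bilateral graph convolution layer
# that couples structural and functional brain graphs through a learned
# correspondence matrix, loosely following the MDGCN abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilateralGraphConv(nn.Module):
    """One illustrative coupling layer.

    A_s, A_f : (N, N) adjacency matrices of the structural / functional graphs.
    X_s, X_f : (N, d) node features for the two modalities.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_s = nn.Linear(in_dim, out_dim)      # intra-modal (structural)
        self.lin_f = nn.Linear(in_dim, out_dim)      # intra-modal (functional)
        self.lin_cross = nn.Linear(in_dim, out_dim)  # cross-modal messages
        self.q = nn.Linear(in_dim, out_dim)          # query projection for the
        self.k = nn.Linear(in_dim, out_dim)          # correspondence matrix

    def forward(self, X_s, X_f, A_s, A_f):
        # Attentive correspondence matrix C: row i scores how strongly
        # structural node i attends to each functional node (assumption).
        C = F.softmax(self.q(X_s) @ self.k(X_f).T / X_s.size(-1) ** 0.5, dim=-1)

        # Bilateral aggregation: each modality mixes its own graph
        # neighbourhood with messages routed through C (or C transposed).
        H_s = F.relu(A_s @ self.lin_s(X_s) + C @ self.lin_cross(X_f))
        H_f = F.relu(A_f @ self.lin_f(X_f) + C.T @ self.lin_cross(X_s))
        return H_s, H_f, C


# Toy usage: 90 ROIs, 16-dimensional node features per modality.
N, d = 90, 16
layer = BilateralGraphConv(d, 32)
H_s, H_f, C = layer(torch.randn(N, d), torch.randn(N, d),
                    torch.rand(N, N), torch.rand(N, N))
print(H_s.shape, H_f.shape, C.shape)  # (90, 32) (90, 32) (90, 90)

In this reading, C plays the role of the compositional correspondence matrix described above: it is re-estimated from the current node features at every layer, so the cross-modal graph is dynamic rather than fixed.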
Related papers
- GTP-4o: Modality-prompted Heterogeneous Graph Learning for Omni-modal Biomedical Representation [68.63955715643974]
We propose an innovative Modality-prompted Heterogeneous Graph for Omnimodal Learning (GTP-4o)
arXiv Detail & Related papers (2024-07-08T01:06:13Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Classification of developmental and brain disorders via graph convolutional aggregation [6.6356049194991815]
We introduce an aggregator normalization graph convolutional network by leveraging aggregation in graph sampling.
The proposed model learns discriminative graph node representations by incorporating both imaging and non-imaging features into the graph nodes and edges.
We benchmark our model against several recent baseline methods on two large datasets, Autism Brain Imaging Data Exchange (ABIDE) and Alzheimer's Disease Neuroimaging Initiative (ADNI)
arXiv Detail & Related papers (2023-11-13T14:36:29Z)
- MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction [3.2889220522843625]
We develop an innovative fusion approach called MaxCorr MGNN that models non-linear modality correlations within and across patients.
We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs.
We evaluate our model on an outcome prediction task on a Tuberculosis dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
arXiv Detail & Related papers (2023-07-13T23:52:41Z)
- Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery [29.790200009136825]
We propose a novel method that offsets negative interference between modalities during multi-modal integration and extracts heterogeneous information from graphs.
Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms the state-of-the-art methods.
In addition, discriminative brain regions associated with autism are identified by our model, providing guidance for the study of autism pathology.
arXiv Detail & Related papers (2023-03-03T07:09:17Z)
- DBGDGM: Dynamic Brain Graph Deep Generative Model [63.23390833353625]
Graphs are a natural representation of brain activity derived from functional magnetic resonance imaging (fMRI) data.
It is well known that clusters of anatomical brain regions, known as functional connectivity networks (FCNs), encode temporal relationships which can serve as useful biomarkers for understanding brain function and dysfunction.
Previous works, however, ignore the temporal dynamics of the brain and focus on static graphs.
We propose a dynamic brain graph deep generative model (DBGDGM) which simultaneously clusters brain regions into temporally evolving communities and learns dynamic unsupervised node embeddings.
arXiv Detail & Related papers (2023-01-26T20:45:30Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction [33.03449099154264]
Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions (a generic sketch of this FC-graph construction is given after this list).
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores.
arXiv Detail & Related papers (2021-01-20T20:53:07Z)
- Learning Multi-resolution Graph Edge Embedding for Discovering Brain Network Dysfunction in Neurological Disorders [10.12649945620901]
We propose Multi-resolution Edge Network (MENET) to detect disease-specific connectomic benchmarks.
MENET accurately predicts diagnostic labels and identifies brain connectivities highly associated with neurological disorders.
arXiv Detail & Related papers (2019-12-03T03:46:14Z)
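Several of the related papers above (e.g., the MGCN and DBGDGM entries) operate on functional-connectivity graphs derived from fMRI region-of-interest time series. As a generic reference, not the preprocessing pipeline of any specific paper, the Python sketch below builds such a graph with Pearson correlation and a simple sparsification step; the shapes and threshold rule are illustrative assumptions.

# Generic sketch (assumed, not taken from any of the listed papers):
# build a functional-connectivity (FC) graph from ROI time series.
import numpy as np


def fc_graph(ts: np.ndarray, keep_ratio: float = 0.2) -> np.ndarray:
    """ts: (T, N) array of T time points for N brain regions.

    Returns an (N, N) adjacency matrix keeping the strongest
    `keep_ratio` fraction of absolute Pearson correlations.
    """
    fc = np.corrcoef(ts.T)            # (N, N) Pearson correlation matrix
    np.fill_diagonal(fc, 0.0)         # drop self-connections
    strength = np.abs(fc)
    thr = np.quantile(strength, 1.0 - keep_ratio)  # sparsification threshold
    adj = np.where(strength >= thr, fc, 0.0)       # keep only strong edges
    return adj


# Toy usage: 200 time points, 90 ROIs.
rng = np.random.default_rng(0)
A = fc_graph(rng.standard_normal((200, 90)))
print(A.shape, np.count_nonzero(A))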
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.