Functional2Structural: Cross-Modality Brain Networks Representation
Learning
- URL: http://arxiv.org/abs/2205.07854v1
- Date: Fri, 6 May 2022 03:45:36 GMT
- Title: Functional2Structural: Cross-Modality Brain Networks Representation
Learning
- Authors: Haoteng Tang, Xiyao Fu, Lei Guo, Yalin Wang, Scott Mackin, Olusola
Ajilore, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan
- Abstract summary: Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
- Score: 55.24969686433101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MRI-based modeling of brain networks has been widely used to understand
functional and structural interactions and connections among brain regions, and
factors that affect them, such as brain development and disease. Graph mining
on brain networks may facilitate the discovery of novel biomarkers for clinical
phenotypes and neurodegenerative diseases. Since brain networks derived from
functional and structural MRI describe the brain topology from different
perspectives, exploring a representation that combines these cross-modality
brain networks is non-trivial. Most current studies aim to extract a fused
representation of the two types of brain network by projecting the structural
network to the functional counterpart. Since the functional network is dynamic
and the structural network is static, mapping a static object to a dynamic
object is suboptimal. However, mapping in the opposite direction is not
feasible due to the non-negativity requirement of current graph learning
techniques. Here, we propose a novel graph learning framework, known as Deep
Signed Brain Networks (DSBN), with a signed graph encoder that, from an
opposite perspective, learns the cross-modality representations by projecting
the functional network to the structural counterpart. We validate our framework
on clinical phenotype and neurodegenerative disease prediction tasks using two
independent, publicly available datasets (HCP and OASIS). The experimental
results clearly demonstrate the advantages of our model compared to several
state-of-the-art methods.
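To make the functional-to-structural projection concrete, here is a minimal sketch, assuming a PyTorch-style signed encoder rather than the authors' actual DSBN architecture: the signed functional connectivity matrix (resting-state correlations can be negative) is split into positive and negative parts, each part drives its own aggregation, and the decoder applies a ReLU so the reconstructed structural network stays non-negative. All module names, layer sizes, and the toy reconstruction loss are illustrative assumptions.

```python
# Minimal sketch (not the authors' DSBN implementation): a signed graph
# encoder that handles negative functional correlations, plus a decoder
# that reconstructs a non-negative structural connectivity matrix.
import torch
import torch.nn as nn


class SignedGraphEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.pos_lin = nn.Linear(in_dim, hid_dim)   # aggregation over positive edges
        self.neg_lin = nn.Linear(in_dim, hid_dim)   # aggregation over negative edges
        self.mix = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, x: torch.Tensor, func_adj: torch.Tensor) -> torch.Tensor:
        # x: (n_rois, in_dim) node features; func_adj: signed functional connectivity
        a_pos = func_adj.clamp(min=0.0)             # positive correlations
        a_neg = (-func_adj).clamp(min=0.0)          # magnitudes of negative correlations
        h_pos = torch.relu(self.pos_lin(a_pos @ x))
        h_neg = torch.relu(self.neg_lin(a_neg @ x))
        return self.mix(torch.cat([h_pos, h_neg], dim=-1))   # (n_rois, hid_dim)


class Functional2Structural(nn.Module):
    """Projects a signed functional network onto a non-negative structural one."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoder = SignedGraphEncoder(in_dim, hid_dim)

    def forward(self, x, func_adj):
        z = self.encoder(x, func_adj)               # ROI embeddings
        struct_hat = torch.relu(z @ z.t())          # ReLU keeps decoded edges >= 0
        return z, struct_hat


# Toy usage: 90 ROIs, identity node features, random symmetric functional matrix.
n_rois = 90
model = Functional2Structural(in_dim=n_rois, hid_dim=32)
func_adj = torch.randn(n_rois, n_rois)
func_adj = (func_adj + func_adj.t()) / 2
z, struct_hat = model(torch.eye(n_rois), func_adj)
# Hypothetical reconstruction target standing in for a real structural network.
loss = nn.functional.mse_loss(struct_hat, torch.rand(n_rois, n_rois))
```

Decoding with a ReLU-rectified inner product is what keeps the functional-to-structural direction compatible with non-negative structural connectivity, which mirrors the abstract's argument for projecting in this direction rather than the reverse.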
Related papers
- Graph Neural Networks for Brain Graph Learning: A Survey [53.74244221027981]
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
Using GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain effective network via dynamic causal modeling.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming to capture the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Leveraging Brain Modularity Prior for Interpretable Representation Learning of fMRI [38.236414924531196]
Resting-state functional magnetic resonance imaging (rs-fMRI) can reflect spontaneous neural activities in the brain.
Previous studies propose to extract fMRI representations through diverse machine/deep learning methods for subsequent analysis.
This paper proposes a Brain Modularity-constrained dynamic Representation learning (BMR) framework for interpretable fMRI analysis.
arXiv Detail & Related papers (2023-06-24T23:45:47Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning (a minimal augmentation-and-contrastive-loss sketch appears after this list).
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis [31.281194583900998]
We propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections.
The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator.
arXiv Detail & Related papers (2022-06-30T08:02:05Z)
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models in prediction performance.
arXiv Detail & Related papers (2022-05-21T14:08:53Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Joint Embedding of Structural and Functional Brain Networks with Graph Neural Networks for Mental Illness Diagnosis [17.48272758284748]
Graph Neural Networks (GNNs) have become the de facto model for analyzing graph-structured data.
We develop a novel multiview GNN for multimodal brain networks.
In particular, we regard each modality as a view for brain networks and employ contrastive learning for multimodal fusion.
arXiv Detail & Related papers (2021-07-07T13:49:57Z)
- Deep Representation Learning For Multimodal Brain Networks [9.567489601729328]
We propose a novel end-to-end deep graph representation learning framework (Deep Multimodal Brain Networks, DMBN) to fuse multimodal brain networks.
The higher-order network mappings from brain structural networks to functional networks are learned in the node domain.
The experimental results show the superiority of the proposed method over some other state-of-the-art deep brain network models.
arXiv Detail & Related papers (2020-07-19T20:32:05Z)
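As referenced in the contrastive brain network learning entry above, the following is a minimal sketch of graph augmentation for contrastive pretraining on functional brain networks. It assumes a simple edge-dropping augmentation, a toy one-step message-passing encoder, and an InfoNCE-style loss; none of these are taken from that paper, and all names and hyperparameters are illustrative.

```python
# Illustrative sketch only: two stochastic "views" of each subject's functional
# connectivity matrix are produced by randomly masking edges, embedded, and
# trained so that matched subjects are positive pairs and other subjects in the
# batch are negatives.
import torch
import torch.nn.functional as F


def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Randomly zero a fraction p of connections, keeping the matrix symmetric."""
    keep = (torch.rand_like(adj) > p).float()
    keep = torch.triu(keep, diagonal=1)
    return adj * (keep + keep.t())


def embed(adj: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """One linear message-passing step on identity node features, mean-pooled over ROIs."""
    x = torch.eye(adj.shape[0])
    return torch.relu(adj @ x @ w).mean(dim=0)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Matched rows of z1 and z2 are positive pairs; all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = (z1 @ z2.t()) / temperature
    return F.cross_entropy(logits, torch.arange(z1.shape[0]))


# Toy usage: a batch of 8 subjects with 90 ROIs each.
n_rois, batch, dim = 90, 8, 32
w = torch.randn(n_rois, dim, requires_grad=True)        # shared toy encoder weights
adjs = [torch.randn(n_rois, n_rois) for _ in range(batch)]
adjs = [(a + a.t()) / 2 for a in adjs]                   # symmetrize
z1 = torch.stack([embed(drop_edges(a), w) for a in adjs])
z2 = torch.stack([embed(drop_edges(a), w) for a in adjs])
loss = info_nce(z1, z2)
loss.backward()
```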