Brain Cortical Functional Gradients Predict Cortical Folding Patterns
via Attention Mesh Convolution
- URL: http://arxiv.org/abs/2205.10605v1
- Date: Sat, 21 May 2022 14:08:53 GMT
- Title: Brain Cortical Functional Gradients Predict Cortical Folding Patterns
via Attention Mesh Convolution
- Authors: Li Yang, Zhibin He, Changhe Li, Junwei Han, Dajiang Zhu, Tianming Liu,
Tuo Zhang
- Abstract summary: We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models in prediction performance.
- Score: 51.333918985340425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since gyri and sulci, two basic anatomical building blocks of cortical
folding patterns, were suggested to bear different functional roles, a precise
mapping from brain function to gyro-sulcal patterns can provide profound
insights into both biological and artificial neural networks. However, a
generic theory and an effective computational model are still lacking, owing to
the highly nonlinear relation between the two, large inter-individual
variability, and the description of brain functional regions/networks as a
mosaic, whose spatial patterning has therefore not been considered. In this
work, we adopted brain functional gradients derived from resting-state fMRI to
embed the "gradual" change of functional connectivity patterns, and developed a
novel attention mesh convolution model to predict cortical gyro-sulcal segmentation
maps on individual brains. The convolution on mesh considers the spatial
organization of functional gradients and folding patterns on a cortical sheet
and the newly designed channel attention block enhances the interpretability of
the contribution of different functional gradients to cortical folding
prediction. Experiments show that our model outperforms other state-of-the-art
models in prediction performance. In addition, we found that the
dominant functional gradients contribute less to folding prediction. On the
activation maps of the last layer, some well-studied cortical landmarks are
found on the borders of, rather than within, the highly activated regions.
These results and findings suggest that a specifically designed artificial
neural network can improve the precision of the mapping between brain functions
and cortical folding patterns, and can provide valuable insight into the
brain's anatomy-function relation for neuroscience.
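The abstract describes a channel attention block that reweights functional-gradient channels before mesh convolution. As a rough illustration, here is a minimal squeeze-and-excitation-style channel gate over per-vertex gradient features; all names, shapes, and weights are hypothetical and not the authors' implementation.

```python
import numpy as np

# Hypothetical channel-attention sketch over per-vertex functional-gradient
# channels on a cortical mesh (illustrative shapes, random weights).
def channel_attention(x, w1, w2):
    """x: (n_vertices, n_channels) feature map on a mesh surface.
    Returns the reweighted features and the per-channel attention weights."""
    s = x.mean(axis=0)                    # squeeze: global average per channel
    h = np.maximum(s @ w1, 0.0)           # excitation: hidden layer with ReLU
    a = 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid gate in (0, 1) per channel
    return x * a, a                       # rescale each channel by its gate

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 8))        # 1000 vertices, 8 gradient channels
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
y, a = channel_attention(x, w1, w2)
print(y.shape, a.shape)  # (1000, 8) (8,)
```

The learned gate `a` is what would make per-channel contributions inspectable, which is the interpretability property the abstract attributes to its channel attention block.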
Related papers
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models [6.866437017874623]
We introduce a data-driven approach in which we synthesize images predicted to activate a given brain region using paired natural images and fMRI recordings.
Our approach builds on recent generative methods by combining large-scale diffusion models with brain-guided image synthesis.
These results advance our understanding of the fine-grained functional organization of human visual cortex.
arXiv Detail & Related papers (2023-06-05T17:59:05Z)
- Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We achieved an F1 score of nearly 72% with the graph neural operator approach, which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- An explainability framework for cortical surface-based deep learning [110.83289076967895]
We develop a framework for cortical surface-based deep learning.
First, we adapted a perturbation-based approach for use with surface data.
We show that our explainability framework is not only able to identify important features and their spatial location but that it is also reliable and valid.
arXiv Detail & Related papers (2022-03-15T23:16:49Z)
- Contrastive Representation Learning for Whole Brain Cytoarchitectonic Mapping in Histological Human Brain Sections [0.4588028371034407]
We propose a contrastive learning objective for encoding microscopic image patches into robust microstructural features.
We show that a model pre-trained using this learning task outperforms a model trained from scratch, as well as a model pre-trained on a recently proposed auxiliary task.
arXiv Detail & Related papers (2020-11-25T16:44:23Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
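The predictive-coding summary above (minimizing prediction errors) can be made concrete with a minimal sketch: a latent state is iteratively updated to reduce the squared prediction error of a linear generative model. All shapes, weights, and the learning rate here are illustrative, and this is not the paper's relaxed implementation.

```python
import numpy as np

# Minimal predictive-coding sketch (illustrative only): a latent state z
# predicts an observation x through generative weights W, and z is refined
# by gradient descent on the squared prediction error E = ||x - W z||^2.
rng = np.random.default_rng(1)
W = rng.standard_normal((10, 4)) * 0.3   # generative weights: latent -> data
z_true = rng.standard_normal(4)          # latent cause that generated x
x = W @ z_true                           # observation to be explained

z = np.zeros(4)                          # inferred latent state
lr = 0.1                                 # inference step size
errors = []
for _ in range(500):
    eps = x - W @ z                      # prediction error signal
    z += lr * (W.T @ eps)                # error-driven inference update
    errors.append(float(eps @ eps))

print(errors[-1] < errors[0])            # prediction error shrinks: True
```

Note that the inference update reuses the transpose of the forward weights (`W.T`), which is exactly the kind of symmetric-weight assumption the paper argues can be relaxed.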
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.