NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes
- URL: http://arxiv.org/abs/2409.17510v3
- Date: Sun, 27 Oct 2024 03:25:05 GMT
- Title: NeuroPath: A Neural Pathway Transformer for Joining the Dots of Human Connectomes
- Authors: Ziquan Wei, Tingting Dan, Jiaqi Ding, Guorong Wu
- Abstract summary: We introduce the concept of topological detour to characterize how a ubiquitous instance of FC is supported by neural pathways (detour) physically wired by SC.
In the cliché of machine learning, the multi-hop detour pathway underlying SC-FC coupling allows us to devise a novel multi-head self-attention mechanism.
We propose a biologically inspired deep model, coined NeuroPath, to find putative connectomic feature representations from an unprecedented amount of neuroimages.
- Score: 4.362614418491178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although modern imaging technologies allow us to study connectivity between two distinct brain regions in-vivo, an in-depth understanding of how anatomical structure supports brain function and how spontaneous functional fluctuations give rise to remarkable cognition is still elusive. Meanwhile, tremendous efforts have been made in the realm of machine learning to establish the nonlinear mapping between neuroimaging data and phenotypic traits. However, the absence of neuroscience insight in the current approaches poses significant challenges in understanding cognitive behavior from transient neural activities. To address this challenge, we put the spotlight on the coupling mechanism of structural connectivity (SC) and functional connectivity (FC) by formulating this network neuroscience question as an expressive graph representation learning problem for high-order topology. Specifically, we introduce the concept of topological detour to characterize how a ubiquitous instance of FC (direct link) is supported by neural pathways (detours) physically wired by SC, which forms a cyclic loop of interaction between brain structure and function. In the cliché of machine learning, the multi-hop detour pathway underlying SC-FC coupling allows us to devise a novel multi-head self-attention mechanism within the Transformer to capture multi-modal feature representations from paired graphs of SC and FC. Taken together, we propose a biologically inspired deep model, coined NeuroPath, to find putative connectomic feature representations from an unprecedented amount of neuroimages, which can be plugged into various downstream applications such as task recognition and disease diagnosis. We have evaluated NeuroPath on large-scale public datasets including HCP and UK Biobank under supervised and zero-shot learning, where the state-of-the-art performance of NeuroPath indicates its great potential for network neuroscience.
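To make the "topological detour" idea concrete, here is a minimal sketch, an illustrative reading of the abstract rather than the authors' released code, that enumerates, for each sufficiently strong FC link, the multi-hop SC paths that could serve as its supporting detours. The function name `find_detours`, the FC threshold, and the hop limit are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the "topological detour" concept (assumptions noted above).
import networkx as nx
import numpy as np

def find_detours(sc: np.ndarray, fc: np.ndarray,
                 fc_thresh: float = 0.5, max_hops: int = 3):
    """Return {(i, j): [SC paths]} for FC edges above fc_thresh.

    sc and fc are symmetric (n_roi x n_roi) connectivity matrices.
    A "detour" is any multi-hop SC path supporting a direct FC link.
    """
    g_sc = nx.from_numpy_array((sc > 0).astype(int))  # binarized structural graph
    detours = {}
    n = sc.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if fc[i, j] < fc_thresh:
                continue  # only keep strong (ubiquitous) FC links
            paths = nx.all_simple_paths(g_sc, i, j, cutoff=max_hops)
            detours[(i, j)] = [p for p in paths if len(p) > 2]  # exclude the direct edge
    return detours
```

Along the same lines, the detour-informed multi-head self-attention described in the abstract could plausibly be realized as below, where attention scores over ROI node features are biased by a multi-hop SC reachability term combined with FC. The module name `DetourAttention` and all hyperparameters are assumptions, not the published NeuroPath architecture.

```python
# Minimal PyTorch sketch of detour-modulated multi-head self-attention (assumed design).
import torch
import torch.nn as nn

class DetourAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, max_hops: int = 3):
        super().__init__()
        self.heads, self.max_hops = heads, max_hops
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, sc, fc):
        # x: (batch, n_roi, dim) node features; sc, fc: (batch, n_roi, n_roi)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, -1).transpose(1, 2) for t in (q, k, v))

        # Multi-hop SC reachability as a stand-in for detour pathways:
        # accumulate powers of the binarized SC matrix up to max_hops.
        hop = (sc > 0).float()
        reach, power = torch.zeros_like(hop), hop
        for _ in range(self.max_hops):
            reach = reach + power
            power = (power @ hop).clamp(max=1.0)
        detour_bias = (reach.clamp(max=1.0) * fc).unsqueeze(1)  # (b, 1, n, n)

        attn = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5) + detour_bias
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, d)
        return self.out(out)
```

The additive bias is only one plausible way to inject SC-FC coupling into attention; masking non-reachable ROI pairs outright would be an equally reasonable alternative under the same reading of the abstract.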
Related papers
- Retinal Vessel Segmentation via Neuron Programming [17.609169389489633]
This paper introduces a novel approach to neural network design, termed "neuron programming", to enhance a network's representation ability at the neuronal level.
Comprehensive experiments validate that neuron programming can achieve competitive performance in retinal blood vessel segmentation.
arXiv Detail & Related papers (2024-11-17T16:03:30Z) - A Fuzzy-based Approach to Predict Human Interaction by Functional Near-Infrared Spectroscopy [25.185426359719454]
The paper introduces a Fuzzy-based Attention (Fuzzy Attention Layer) mechanism, a novel computational approach to interpretability and efficacy of neural models in psychological research.
By leveraging fuzzy logic, the Fuzzy Attention Layer is capable of learning and identifying interpretable patterns of neural activity.
arXiv Detail & Related papers (2024-09-26T09:20:12Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable within deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z) - Exploring General Intelligence via Gated Graph Transformer in Functional Connectivity Studies [39.82681427764513]
The Gated Graph Transformer (GGT) framework is designed to predict cognitive metrics based on Functional Connectivity (FC).
Empirical validation on the Philadelphia Neurodevelopmental Cohort (PNC) underscores the superior predictive prowess of our model.
arXiv Detail & Related papers (2024-01-18T19:28:26Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis [31.281194583900998]
We propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections.
The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator.
arXiv Detail & Related papers (2022-06-30T08:02:05Z) - Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)