Large Connectome Model: An fMRI Foundation Model of Brain Connectomes Empowered by Brain-Environment Interaction in Multitask Learning Landscape
- URL: http://arxiv.org/abs/2510.18910v1
- Date: Tue, 21 Oct 2025 03:50:51 GMT
- Title: Large Connectome Model: An fMRI Foundation Model of Brain Connectomes Empowered by Brain-Environment Interaction in Multitask Learning Landscape
- Authors: Ziquan Wei, Tingting Dan, Guorong Wu
- Abstract summary: A reliable foundation model of functional neuroimages is critical to promote clinical applications. We formulate brain modeling as multitask learning by capitalizing on rich environmental variables and demographic data. We have evaluated our foundation model on a variety of applications, including sex prediction, human behavior recognition, and early diagnosis of Autism, Parkinson's disease, Alzheimer's disease, and Schizophrenia.
- Score: 12.920888696520366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A reliable foundation model of functional neuroimages is critical to promote clinical applications, where the performance of current AI models is significantly impeded by limited sample sizes. To that end, tremendous efforts have been made to pretrain large models on extensive unlabeled fMRI data using scalable self-supervised learning. Since self-supervision is not necessarily aligned with the brain-to-outcome relationship, most foundation models are suboptimal for downstream tasks such as predicting disease outcomes. By capitalizing on rich environmental variables and demographic data along with an unprecedented amount of functional neuroimages, we formulate brain modeling as multitask learning and present a scalable model architecture for (i) multitask pretraining by tokenizing multiple brain-environment interactions (BEI) and (ii) semi-supervised finetuning by assigning pseudo-labels from pretrained BEI. We have evaluated our foundation model on a variety of applications, including sex prediction, human behavior recognition, and early diagnosis of Autism, Parkinson's disease, Alzheimer's disease, and Schizophrenia, where promising results indicate its great potential to facilitate current neuroimaging applications in clinical routines.
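The abstract's two-stage recipe can be illustrated with a deliberately tiny numpy sketch: a shared encoder with per-task heads trained jointly on environment/demographic targets (stage i), then the pretrained heads emitting pseudo-labels for unlabeled scans (stage ii). All dimensions, target names, and the purely linear architecture here are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 subjects, 64 connectome features, 16-d embedding,
# two environment/demographic targets standing in for rich BEI variables.
n, d, k = 200, 64, 16
X = rng.normal(size=(n, d))                   # tokenized connectome features
Y = {"age": rng.normal(size=(n, 1)),          # synthetic environment targets
     "sex": rng.normal(size=(n, 1))}

W = rng.normal(scale=0.1, size=(d, k))        # shared encoder (one linear map)
V = {t: rng.normal(scale=0.1, size=(k, 1)) for t in Y}  # per-task heads

lr = 1e-3
for _ in range(500):                          # stage (i): joint multitask pretraining
    H = X @ W                                 # shared representation
    dW = np.zeros_like(W)
    for t, Yt in Y.items():
        err = H @ V[t] - Yt                   # per-task residual
        dW += (X.T @ (err @ V[t].T)) / n      # encoder gradient, summed over tasks
        V[t] -= lr * (H.T @ err) / n          # head gradient step
    W -= lr * dW

# Stage (ii): pretrained heads assign pseudo-labels to unlabeled scans,
# which a finetuning stage could then consume as weak supervision.
X_unlab = rng.normal(size=(50, d))
pseudo = {t: X_unlab @ W @ V[t] for t in Y}
print({t: p.shape for t, p in pseudo.items()})
```

The point of the sketch is only the structure: one shared backbone, several cheap task heads sharing its gradients, and pseudo-labels read off the pretrained heads.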
Related papers
- Towards a general-purpose foundation model for fMRI analysis [58.06455456423138]
We introduce NeuroSTORM, a framework that learns from 4D fMRI volumes and enables efficient knowledge transfer across diverse applications. NeuroSTORM is pre-trained on 28.65 million fMRI frames (>9,000 hours) from over 50,000 subjects across multiple centers and ages 5 to 100. It outperforms existing methods across five tasks: age/gender prediction, phenotype prediction, disease diagnosis, fMRI-to-image retrieval, and task-based fMRI.
arXiv Detail & Related papers (2025-06-11T23:51:01Z) - Brain Foundation Models with Hypergraph Dynamic Adapter for Brain Disease Analysis [18.02038938366483]
Brain diseases, such as Alzheimer's disease and brain tumors, present profound challenges due to their complexity and societal impact. Recent advancements in brain foundation models have shown significant promise in addressing a range of brain-related tasks. We propose SAM-Brain3D, a brain-specific foundation model trained on over 66,000 brain image-label pairs.
arXiv Detail & Related papers (2025-05-01T16:06:17Z) - A Foundational Brain Dynamics Model via Stochastic Optimal Control [15.8358479596609]
We introduce a foundational model for brain dynamics that utilizes stochastic optimal control (SOC) and amortized inference. Our method features a continuous-discrete state space model (SSM) that can robustly handle the intricate and noisy nature of fMRI signals. Our model attains state-of-the-art results across a variety of downstream tasks, including demographic prediction, trait analysis, disease diagnosis, and prognosis.
arXiv Detail & Related papers (2025-02-07T12:57:26Z) - UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z) - Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z) - Robust Alzheimer's Progression Modeling using Cross-Domain Self-Supervised Deep Learning [3.0948853907734044]
We develop a cross-domain self-supervised learning approach for disease prognostic modeling as a regression problem using medical images as input.
We demonstrate that self-supervised pretraining can improve the prediction of Alzheimer's Disease progression from brain MRI.
We also show that pretraining on extended (but not labeled) brain MRI data outperforms pretraining on natural images.
arXiv Detail & Related papers (2022-11-15T23:04:15Z) - Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z) - Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly-designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both visual and language encoders trained multimodally are more brain-like than unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z) - DeepAD: A Robust Deep Learning Model of Alzheimer's Disease Progression for Real-World Clinical Applications [0.9999629695552196]
We propose a novel multi-task deep learning model to predict Alzheimer's disease progression.
Our model integrates high dimensional MRI features from a 3D convolutional neural network with other data modalities.
arXiv Detail & Related papers (2022-03-17T05:42:00Z) - A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-based model to automatically predict the Chicago Classification (CC) diagnosis of a high-resolution manometry (HRM) study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
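The MultiView ICA generative model described above (each subject's data as a subject-specific linear mix of shared independent sources plus noise, x_i = A_i s + n_i) can be sketched in a few lines of numpy. The sizes, noise level, and the least-squares sanity check are illustrative assumptions, not the paper's actual fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 subjects, 3 shared sources, 10 sensors, 500 samples.
m, p, q, T = 4, 3, 10, 500
S = rng.laplace(size=(p, T))                     # shared non-Gaussian sources
A = [rng.normal(size=(q, p)) for _ in range(m)]  # subject-specific mixing
X = [A_i @ S + 0.1 * rng.normal(size=(q, T))     # x_i = A_i s + n_i
     for A_i in A]

# Sanity check of the generative model: given the true mixing matrix,
# least squares recovers the shared sources up to the noise level.
S_hat = np.linalg.lstsq(A[0], X[0], rcond=None)[0]
corr = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
print(round(corr, 3))
```

In the actual method the A_i are unknown and estimated jointly across subjects by maximizing a likelihood over the shared sources; the sketch only demonstrates the data model that the estimation inverts.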
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.