Zero-shot Learning of Individualized Task Contrast Prediction from
Resting-state Functional Connectomes
- URL: http://arxiv.org/abs/2310.14105v1
- Date: Sat, 21 Oct 2023 20:12:22 GMT
- Title: Zero-shot Learning of Individualized Task Contrast Prediction from
Resting-state Functional Connectomes
- Authors: Minh Nguyen, Gia H. Ngo, Mert R. Sabuncu
- Abstract summary: It is possible to train ML models to predict subject-specific task-evoked activity using resting-state functional MRI (rsfMRI) scans.
While rsfMRI scans are relatively easy to collect, obtaining sufficient task fMRI scans is much harder as it involves more complex experimental designs and procedures.
We show that this reliance can be reduced by leveraging group-average contrasts, enabling zero-shot predictions for novel tasks.
- Score: 9.78824332635036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given sufficient pairs of resting-state and task-evoked fMRI scans from
subjects, it is possible to train ML models to predict subject-specific
task-evoked activity using resting-state functional MRI (rsfMRI) scans.
However, while rsfMRI scans are relatively easy to collect, obtaining
sufficient task fMRI scans is much harder as it involves more complex
experimental designs and procedures. Thus, the reliance on scarce paired data
limits the application of current techniques to only tasks seen during
training. We show that this reliance can be reduced by leveraging group-average
contrasts, enabling zero-shot predictions for novel tasks. Our approach, named
OPIC (short for Omni-Task Prediction of Individual Contrasts), takes as input a
subject's rsfMRI-derived connectome and a group-average contrast, to produce a
prediction of the subject-specific contrast. Similar to zero-shot learning in
large language models using special inputs to obtain answers for novel natural
language processing tasks, inputting group-average contrasts guides the OPIC
model to generalize to novel tasks unseen in training. Experimental results
show that OPIC's predictions for novel tasks not only outperform simple
group averages but are also competitive with the in-domain predictions of a
state-of-the-art model trained on those tasks' data.
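The abstract describes OPIC as a mapping from two inputs (a subject's rsfMRI-derived connectome and a group-average task contrast) to one output (the subject-specific contrast). A minimal sketch of that interface, with a toy per-vertex linear model standing in for the actual network (all sizes, names, and the linear form are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

n_vertices = 100        # toy number of cortical surface vertices
n_conn_features = 50    # toy number of connectome features per vertex


def predict_contrast(connectome, group_contrast, weights, bias):
    """Toy stand-in for an OPIC-style model: a per-vertex linear readout
    of connectome features, added as an individualized correction to the
    group-average contrast."""
    residual = connectome @ weights + bias   # shape: (n_vertices,)
    return group_contrast + residual


connectome = rng.standard_normal((n_vertices, n_conn_features))
group_contrast = rng.standard_normal(n_vertices)
weights = rng.standard_normal(n_conn_features) * 0.01  # hypothetical params
bias = 0.0

pred = predict_contrast(connectome, group_contrast, weights, bias)
print(pred.shape)  # (100,)
```

The point of the sketch is the signature: because the task identity enters only through the group-average contrast input, swapping in a novel task's group average yields a zero-shot subject-specific prediction without retraining.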
Related papers
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement
Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder
Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
arXiv Detail & Related papers (2023-08-12T11:31:01Z)
- Multi-Level Contrastive Learning for Dense Prediction Task [59.591755258395594]
We present Multi-Level Contrastive Learning for Dense Prediction Task (MCL), an efficient self-supervised method for learning region-level feature representation for dense prediction tasks.
Our method is motivated by the three key factors in detection: localization, scale consistency and recognition.
Our method consistently outperforms the recent state-of-the-art methods on various datasets with significant margins.
arXiv Detail & Related papers (2023-04-04T17:59:04Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The
Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language
Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Centroids Matching: an efficient Continual Learning approach operating
in the embedding space [15.705568893476947]
Catastrophic forgetting (CF) occurs when a neural network loses the information previously learned while training on a set of samples from a different distribution.
We propose a novel regularization method called Centroids Matching, that fights CF by operating in the feature space produced by the neural network.
arXiv Detail & Related papers (2022-08-03T13:17:16Z)
- Uni-Perceiver: Pre-training Unified Architecture for Generic Perception
for Zero-shot and Few-shot Tasks [73.63892022944198]
We present a generic perception architecture named Uni-Perceiver.
It processes a variety of modalities and tasks with unified modeling and shared parameters.
Results show that our pre-trained model without any tuning can achieve reasonable performance even on novel tasks.
arXiv Detail & Related papers (2021-12-02T18:59:50Z)
- Zero-Shot Self-Supervised Learning for MRI Reconstruction [4.542616945567623]
We propose a zero-shot self-supervised learning approach to perform subject-specific accelerated DL MRI reconstruction.
The proposed approach partitions the available measurements from a single scan into three disjoint sets.
In the presence of models pre-trained on a database with different image characteristics, we show that the proposed approach can be combined with transfer learning for faster convergence time and reduced computational complexity.
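The summary above says the zero-shot approach partitions the measurements from a single scan into three disjoint sets. An illustrative sketch of such a three-way split over measurement indices (the 60/20/20 fractions are arbitrary placeholders, not the paper's actual proportions):

```python
import numpy as np

rng = np.random.default_rng(7)

n_measurements = 500                    # toy number of acquired k-space samples
perm = rng.permutation(n_measurements)  # shuffle measurement indices

# Placeholder split points; the paper's real proportions may differ.
cut1 = int(0.6 * n_measurements)
cut2 = int(0.8 * n_measurements)
set_a, set_b, set_c = np.split(perm, [cut1, cut2])

# The three sets are pairwise disjoint and together cover every measurement.
assert set(set_a).isdisjoint(set_b) and set(set_a).isdisjoint(set_c)
assert set(set_b).isdisjoint(set_c)
assert len(set_a) + len(set_b) + len(set_c) == n_measurements
print(len(set_a), len(set_b), len(set_c))  # 300 100 100
```

Splitting one scan's own measurements this way is what lets training, loss computation, and stopping criteria be defined without any external fully-sampled reference data.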
arXiv Detail & Related papers (2021-02-15T18:34:38Z)
- Shared Space Transfer Learning for analyzing multi-site fMRI data [83.41324371491774]
Multi-voxel pattern analysis (MVPA) learns predictive models from task-based functional magnetic resonance imaging (fMRI) data.
MVPA works best with a well-designed feature set and an adequate sample size.
Most fMRI datasets are noisy, high-dimensional, expensive to collect, and with small sample sizes.
This paper proposes the Shared Space Transfer Learning (SSTL) as a novel transfer learning approach.
arXiv Detail & Related papers (2020-10-24T08:50:26Z)
- From Connectomic to Task-evoked Fingerprints: Individualized Prediction
of Task Contrasts from Resting-state Functional Connectivity [17.020869686284165]
Resting-state functional MRI (rsfMRI) yields functional connectomes that can serve as cognitive fingerprints of individuals.
We propose a surface-based convolutional neural network (BrainSurfCNN) model to predict individual task contrasts from their resting-state fingerprints.
arXiv Detail & Related papers (2020-08-07T02:44:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.