Self-Supervised Pretraining on Paired Sequences of fMRI Data for
Transfer Learning to Brain Decoding Tasks
- URL: http://arxiv.org/abs/2305.09057v1
- Date: Mon, 15 May 2023 22:53:12 GMT
- Title: Self-Supervised Pretraining on Paired Sequences of fMRI Data for
Transfer Learning to Brain Decoding Tasks
- Authors: Sean Paulsen, Michael Casey
- Abstract summary: We introduce a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data.
First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening.
Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we introduce a self-supervised pretraining framework for
transformers on functional Magnetic Resonance Imaging (fMRI) data. First, we
pretrain our architecture on two self-supervised tasks simultaneously to teach
the model a general understanding of the temporal and spatial dynamics of human
auditory cortex during music listening. Our pretraining results are the first
to suggest a synergistic effect of multitask training on fMRI data. Second, we
finetune the pretrained models and train additional fresh models on a
supervised fMRI classification task. We observe significantly improved accuracy
on held-out runs with the finetuned models, which demonstrates the ability of
our pretraining tasks to facilitate transfer learning. This work contributes to
the growing body of literature on transformer architectures for pretraining and
transfer learning with fMRI data, and serves as a proof of concept for our
pretraining tasks and multitask pretraining on fMRI data.
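The paper's code is not reproduced here, but the two-phase recipe in the abstract can be sketched in a few lines: a shared transformer encoder is optimized on two self-supervised heads simultaneously, then reused with a fresh head for supervised classification. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation; all names, dimensions, and task heads (FMRIEncoder, n_voxels, the two pretext heads) are assumptions.

```python
# Minimal sketch (not the authors' code) of the two-phase recipe above:
# a shared transformer encoder is pretrained on two self-supervised
# tasks simultaneously, then reused for supervised classification.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Stacked transformer encoder over sequences of fMRI feature vectors."""
    def __init__(self, n_voxels=420, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(n_voxels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                  # x: (batch, time, n_voxels)
        return self.encoder(self.proj(x))  # (batch, time, d_model)

encoder = FMRIEncoder()
head_a = nn.Linear(128, 1)    # e.g. a sequence-level yes/no pretext task
head_b = nn.Linear(128, 420)  # e.g. reconstructing masked timepoints

params = (list(encoder.parameters()) + list(head_a.parameters())
          + list(head_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def pretrain_step(x, target_a, target_b):
    """One multitask step: both self-supervised losses share the encoder."""
    h = encoder(x)
    loss_a = nn.functional.binary_cross_entropy_with_logits(
        head_a(h.mean(dim=1)).squeeze(-1), target_a)
    loss_b = nn.functional.mse_loss(head_b(h), target_b)
    loss = loss_a + loss_b  # simultaneous optimization of both tasks
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Phase two: keep the pretrained encoder, attach a fresh supervised head.
clf_head = nn.Linear(128, 2)  # illustrative two-way fMRI classification
```

In the paper's setup the finetuned models significantly outperform fresh models on held-out runs; the sketch above only fixes the plumbing of sharing one encoder across tasks and phases.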
Related papers
- Self-Supervised Pre-training Tasks for an fMRI Time-series Transformer in Autism Detection [3.665816629105171]
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that encompasses a wide variety of symptoms and degrees of impairment.
We have developed a transformer-based self-supervised framework that directly analyzes time-series fMRI data without computing functional connectivity.
We show that randomly masking entire ROIs gives better model performance than randomly masking time points in the pre-training step.
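As a rough illustration of that comparison, the two masking strategies differ only in which axis of a (time, ROI) matrix is zeroed out. A hedged sketch, with shapes and the masking ratio chosen for illustration rather than taken from the paper:

```python
# Illustrative sketch of the two pre-training masking strategies:
# masking whole ROIs (columns) versus masking timepoints (rows) of a
# (time, n_rois) fMRI matrix. Ratios and shapes are assumptions.
import torch

def mask_rois(x, ratio=0.2):
    """Zero out entire ROI time-series (whole columns)."""
    x = x.clone()
    idx = torch.randperm(x.shape[1])[: int(ratio * x.shape[1])]
    x[:, idx] = 0.0
    return x, idx

def mask_timepoints(x, ratio=0.2):
    """Zero out entire timepoints (whole rows) across all ROIs."""
    x = x.clone()
    idx = torch.randperm(x.shape[0])[: int(ratio * x.shape[0])]
    x[idx, :] = 0.0
    return x, idx

x = torch.randn(200, 116)           # e.g. 200 TRs x 116 atlas ROIs
masked, hidden_rois = mask_rois(x)  # model reconstructs the hidden ROIs
```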
arXiv Detail & Related papers (2024-09-18T20:29:23Z)
- Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model [15.330413605539542]
Scaffold Prompt Tuning (ScaPT) is a novel prompt-based framework for adapting large-scale functional magnetic resonance imaging (fMRI) pre-trained models to downstream tasks.
It offers high parameter efficiency and improved performance compared to fine-tuning and prompt-tuning baselines.
ScaPT outperforms fine-tuning and multitask-based prompt tuning in neurodegenerative diseases diagnosis/prognosis and personality trait prediction.
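ScaPT's scaffold mechanism is not detailed in this summary, but the generic prompt tuning it builds on can be sketched as learnable tokens prepended to a frozen backbone's input, with only the prompts and task head trained. A minimal sketch under that generic reading, not ScaPT itself:

```python
# Generic prompt-tuning sketch (not ScaPT itself): learnable prompt
# tokens are prepended to the input while the pretrained encoder stays
# frozen, so only the prompts and the task head receive gradients.
import torch
import torch.nn as nn

class PromptTuned(nn.Module):
    def __init__(self, backbone, d_model=128, n_prompts=10, n_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False           # frozen pretrained model
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                     # x: (batch, time, d_model)
        p = self.prompts.unsqueeze(0).expand(x.shape[0], -1, -1)
        h = self.backbone(torch.cat([p, x], dim=1))
        return self.head(h[:, : self.prompts.shape[0]].mean(dim=1))

layer = nn.TransformerEncoderLayer(128, 4, batch_first=True)
model = PromptTuned(nn.TransformerEncoder(layer, 2))
logits = model(torch.randn(8, 50, 128))  # 8 sequences of 50 encoded frames
```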
arXiv Detail & Related papers (2024-08-20T06:08:37Z)
- Uncovering cognitive taskonomy through transfer learning in masked autoencoder-based fMRI reconstruction [6.3348067441225915]
We employ the masked autoencoder (MAE) model to reconstruct functional magnetic resonance imaging (fMRI) data.
Our study suggests that fMRI reconstruction with the MAE model can uncover latent representations.
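A simplified sketch of the MAE-style reconstruction objective; here masked timepoints are zeroed rather than dropped from the encoder as in a full MAE, and all shapes and the masking ratio are illustrative assumptions:

```python
# Simplified MAE-style sketch: mask part of the input, encode it, and
# train a decoder to reconstruct the signal, scoring masked frames only.
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, n_rois=116, d_model=64):
        super().__init__()
        self.embed = nn.Linear(n_rois, d_model)
        layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, 2)
        self.decoder = nn.Linear(d_model, n_rois)  # reconstruct all ROIs

    def forward(self, x, mask):                    # x: (batch, time, n_rois)
        x_in = x.masked_fill(mask.unsqueeze(-1), 0.0)
        return self.decoder(self.encoder(self.embed(x_in)))

x = torch.randn(8, 100, 116)
mask = torch.rand(8, 100) < 0.5                      # hide half the timepoints
recon = TinyMAE()(x, mask)
loss = nn.functional.mse_loss(recon[mask], x[mask])  # masked frames only
```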
arXiv Detail & Related papers (2024-05-24T09:29:16Z)
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
Transferring the pretrained models to downstream tasks may encounter task discrepancies, since pretraining is typically formulated as image classification or object discrimination.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
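The summary suggests learning a small transform on top of frozen, black-box features. A generic sketch under that reading; NMTune's actual objectives and regularizers may differ:

```python
# Generic sketch of lightweight black-box tuning: the pretrained model
# is a frozen feature extractor, and only a small learned transform of
# the feature space plus the task head are trained. Illustrative only.
import torch
import torch.nn as nn

class FeatureSpaceTuner(nn.Module):
    def __init__(self, d_feat=768, n_classes=10):
        super().__init__()
        self.transform = nn.Linear(d_feat, d_feat)  # learned affine map
        self.head = nn.Linear(d_feat, n_classes)

    def forward(self, feats):        # feats come from a frozen extractor
        return self.head(torch.relu(self.transform(feats)))

feats = torch.randn(32, 768)         # features from the black-box model
logits = FeatureSpaceTuner()(feats)  # only the tuner's parameters train
```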
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Sequential Transfer Learning to Decode Heard and Imagined Timbre from fMRI Data [0.0]
We present a sequential transfer learning framework for transformers on functional Magnetic Resonance Imaging (fMRI) data.
In the first phase, we pre-train our stacked-encoder transformer architecture on Next Thought Prediction.
In the second phase, we fine-tune the models and train additional fresh models on the supervised task of predicting whether or not two sequences of fMRI data were recorded while listening to the same musical timbre.
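A hedged sketch of that phase-two supervised task: a classifier receives two encoded fMRI sequences and predicts whether they share a musical timbre. The pooling and pairing choices here are illustrative, not the authors' design:

```python
# Hedged sketch of the paired-sequence task above: encode two fMRI
# sequences with the phase-one encoder and classify same vs. different
# timbre. Dimensions and mean-pooling are illustrative assumptions.
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    def __init__(self, encoder, d_model=128):
        super().__init__()
        self.encoder = encoder                 # pretrained in phase one
        self.head = nn.Linear(2 * d_model, 1)  # same timbre: yes / no

    def forward(self, seq_a, seq_b):           # (batch, time, d_model) each
        h_a = self.encoder(seq_a).mean(dim=1)  # pool each sequence
        h_b = self.encoder(seq_b).mean(dim=1)
        return self.head(torch.cat([h_a, h_b], dim=-1)).squeeze(-1)

layer = nn.TransformerEncoderLayer(128, 4, batch_first=True)
model = PairClassifier(nn.TransformerEncoder(layer, 2))
logit = model(torch.randn(4, 30, 128), torch.randn(4, 30, 128))
```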
arXiv Detail & Related papers (2023-05-22T16:58:26Z)
- fMRI Neurofeedback Learning Patterns are Predictive of Personal and Clinical Traits [62.997667081978825]
We obtain a personal signature of an individual's learning progress in a self-neuromodulation task, guided by functional MRI (fMRI).
The signature is based on predicting the activity of the Amygdala in a second neurofeedback session, given a similar fMRI-derived brain state in the first session.
arXiv Detail & Related papers (2021-12-21T06:52:48Z)
- Pre-training and Fine-tuning Transformers for fMRI Prediction Tasks [69.85819388753579]
TFF employs a transformer-based architecture and a two-phase training approach.
Self-supervised training is applied to a collection of fMRI scans, where the model is trained for the reconstruction of 3D volume data.
Results show state-of-the-art performance on a variety of fMRI tasks, including age and gender prediction, as well as schizophrenia recognition.
arXiv Detail & Related papers (2021-12-10T18:04:26Z)
- Learning Personal Representations from fMRI by Predicting Neurofeedback Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self-neuromodulation task, guided by functional MRI (fMRI).
The representation is learned by a self-supervised recurrent neural network that predicts the Amygdala activity in the next fMRI frame, given recent fMRI frames, and is conditioned on the learned individual representation.
arXiv Detail & Related papers (2021-12-06T10:16:54Z)