Pre-training and Fine-tuning Transformers for fMRI Prediction Tasks
- URL: http://arxiv.org/abs/2112.05761v1
- Date: Fri, 10 Dec 2021 18:04:26 GMT
- Title: Pre-training and Fine-tuning Transformers for fMRI Prediction Tasks
- Authors: Itzik Malkiel, Gony Rosenman, Lior Wolf, Talma Hendler
- Abstract summary: TFF employs a transformer-based architecture and a two-phase training approach.
Self-supervised training is applied to a collection of fMRI scans, where the model is trained for the reconstruction of 3D volume data.
Results show state-of-the-art performance on a variety of fMRI tasks, including age and gender prediction, as well as schizophrenia recognition.
- Score: 69.85819388753579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the TFF Transformer framework for the analysis of functional
Magnetic Resonance Imaging (fMRI) data. TFF employs a transformer-based
architecture and a two-phase training approach. First, self-supervised training
is applied to a collection of fMRI scans, where the model is trained for the
reconstruction of 3D volume data. Second, the pre-trained model is fine-tuned
on specific tasks, utilizing ground truth labels. Our results show
state-of-the-art performance on a variety of fMRI tasks, including age and
gender prediction, as well as schizophrenia recognition.
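The two-phase recipe is straightforward to express in code. Below is a minimal PyTorch sketch, assuming flattened per-frame volumes as tokens; the architecture, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFMRITransformer(nn.Module):
    """Toy stand-in for TFF: per-frame volumes as tokens, two output heads."""
    def __init__(self, voxels=4096, d_model=128, n_frames=16):
        super().__init__()
        self.embed = nn.Linear(voxels, d_model)            # flattened 3D frame -> token
        self.pos = nn.Parameter(torch.zeros(1, n_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, voxels)           # phase 1: reconstruction head
        self.classify = nn.Linear(d_model, 2)              # phase 2: task head

    def forward(self, x, pretrain=True):
        h = self.encoder(self.embed(x) + self.pos)
        return self.decode(h) if pretrain else self.classify(h.mean(dim=1))

model = TinyFMRITransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scans = torch.randn(8, 16, 4096)                           # dummy data: (batch, frames, voxels)

# Phase 1: self-supervised reconstruction of the input volumes.
F.mse_loss(model(scans, pretrain=True), scans).backward()
opt.step(); opt.zero_grad()

# Phase 2: fine-tune with ground-truth labels (e.g., gender or diagnosis).
labels = torch.randint(0, 2, (8,))
F.cross_entropy(model(scans, pretrain=False), labels).backward()
opt.step(); opt.zero_grad()
```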
Related papers
- Self-Supervised Pre-training Tasks for an fMRI Time-series Transformer in Autism Detection [3.665816629105171]
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that encompasses a wide variety of symptoms and degrees of impairment.
We have developed a transformer-based self-supervised framework that directly analyzes time-series fMRI data without computing functional connectivity.
We show that randomly masking entire ROIs (regions of interest) yields better model performance than randomly masking time points during pre-training; the sketch below contrasts the two strategies.
arXiv Detail & Related papers (2024-09-18T20:29:23Z)
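A minimal sketch of the two masking strategies compared above, assuming a (batch, timepoints, ROIs) layout and a 15% mask ratio; both choices are illustrative, not the paper's exact configuration.

```python
import torch

def mask_rois(x, ratio=0.15):
    """Zero out entire ROI time-series (the variant reported to work better)."""
    b, t, r = x.shape
    keep = torch.rand(b, 1, r) >= ratio        # one decision per ROI, shared over time
    return x * keep

def mask_timepoints(x, ratio=0.15):
    """Zero out whole timepoints across all ROIs (the weaker variant)."""
    b, t, r = x.shape
    keep = torch.rand(b, t, 1) >= ratio        # one decision per timepoint
    return x * keep

x = torch.randn(4, 200, 116)                    # e.g., 200 TRs x 116 atlas ROIs
masked = mask_rois(x)                           # pre-training input; reconstruct x from it
```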
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel method for semantic alignment of multi-subject fMRI signals.
The model is specifically designed to generate fMRI-conditioned feature vectors that can condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training on about 67,000 fMRI-image pairs from various individuals, our model achieves superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - Customizing General-Purpose Foundation Models for Medical Report
Generation [64.31265734687182]
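To make the "unified 2D representation" idea concrete, here is a hedged sketch: fMRI signals of varying dimensions are resized to a fixed 2D grid and auto-encoded. The interpolation-based transform and the tiny convolutional auto-encoder are assumptions for illustration, not fMRI-PTE's actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def to_unified_2d(signal, size=(64, 64)):
    """Resize (batch, 1, n_voxels, n_timepoints) signals to one fixed 2D shape."""
    return F.interpolate(signal, size=size, mode="bilinear", align_corners=False)

class Tiny2DAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

x = to_unified_2d(torch.randn(2, 1, 1000, 300))   # subjects with differing raw dimensions
model = Tiny2DAutoencoder()
loss = F.mse_loss(model(x), x)                    # auto-encoding pre-training objective
```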
- Customizing General-Purpose Foundation Models for Medical Report Generation [64.31265734687182]
The scarcity of labelled medical image-report pairs presents great challenges for developing deep, large-scale neural networks.
We propose customizing off-the-shelf general-purpose large-scale pre-trained models, i.e., foundation models (FMs), from computer vision and natural language processing.
arXiv Detail & Related papers (2023-06-09T03:02:36Z)
- Sequential Transfer Learning to Decode Heard and Imagined Timbre from fMRI Data [0.0]
We present a sequential transfer learning framework for transformers on functional Magnetic Resonance Imaging (fMRI) data.
In the first phase, we pre-train our stacked-encoder transformer architecture on Next Thought Prediction (sketched below).
In the second phase, we fine-tune the models and train additional fresh models on the supervised task of predicting whether two sequences of fMRI data were recorded while listening to the same musical timbre.
arXiv Detail & Related papers (2023-05-22T16:58:26Z)
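A minimal sketch of a next-step pretext task in the spirit of Next Thought Prediction: given a window of fMRI timepoints, predict the following one. The architecture and shapes are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class NextThoughtPredictor(nn.Module):
    def __init__(self, rois=116, d_model=64):
        super().__init__()
        self.embed = nn.Linear(rois, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, rois)

    def forward(self, window):                    # (batch, timepoints, rois)
        h = self.encoder(self.embed(window))
        return self.head(h[:, -1])                # regress the next timepoint

seq = torch.randn(8, 21, 116)                     # 20-step context + 1 target step
model = NextThoughtPredictor()
pred = model(seq[:, :-1])                         # phase 1: self-supervised prediction
loss = nn.functional.mse_loss(pred, seq[:, -1])
# Phase 2 would swap the head for a classifier over paired sequences (same timbre or not).
```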
- Self-Supervised Pretraining on Paired Sequences of fMRI Data for Transfer Learning to Brain Decoding Tasks [0.0]
We introduce a self-supervised pretraining framework for transformers on functional Magnetic Resonance Imaging (fMRI) data.
First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening.
Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task.
arXiv Detail & Related papers (2023-05-15T22:53:12Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, selected according to gradient metrics (see the sketch below).
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
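A rough sketch of reconstruction weighted toward informative patches, in the spirit of the gradient metric mentioned above; the per-patch image-gradient weighting is an assumption, not the paper's exact formulation, and a noisy copy of the input stands in for a decoder output.

```python
import torch
import torch.nn.functional as F

def patch_gradient_weights(img, patch=8):
    """Per-patch image-gradient magnitude, normalized to sum to 1 per sample."""
    gx = F.pad(img[:, :, :, 1:] - img[:, :, :, :-1], (0, 1)).abs()       # horizontal gradient
    gy = F.pad(img[:, :, 1:, :] - img[:, :, :-1, :], (0, 0, 0, 1)).abs() # vertical gradient
    grad = F.avg_pool2d(gx + gy, patch)                                  # (batch, 1, H/p, W/p)
    return grad / grad.sum(dim=(2, 3), keepdim=True).clamp_min(1e-8)

img = torch.randn(2, 1, 64, 64)                 # a 2D slice for brevity (the paper is 3D)
recon = img + 0.1 * torch.randn_like(img)       # hypothetical decoder output
per_patch_err = F.avg_pool2d(F.mse_loss(recon, img, reduction="none"), 8)
loss = (patch_gradient_weights(img) * per_patch_err).sum()  # high-gradient patches weigh more
```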
- Medical Transformer: Universal Brain Encoder for 3D MRI Analysis [1.6287500717172143]
Existing 3D-based methods transfer pre-trained models to downstream tasks, but they demand a massive number of parameters to train a model for 3D medical imaging.
We propose a novel transfer learning framework, called Medical Transformer, that effectively models 3D volumetric images in the form of a sequence of 2D image slices.
arXiv Detail & Related papers (2021-04-28T08:34:21Z)
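The slice-as-sequence idea described above can be sketched as follows: a shared 2D encoder embeds each slice of a 3D volume, and a transformer attends across the slice sequence. The slice encoder and dimensions are illustrative assumptions, not the paper's actual components.

```python
import torch
import torch.nn as nn

class SliceSequenceEncoder(nn.Module):
    def __init__(self, d_model=128, n_slices=32):
        super().__init__()
        self.slice_encoder = nn.Sequential(            # shared 2D CNN applied per slice
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, d_model))
        self.pos = nn.Parameter(torch.zeros(1, n_slices, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, volume):                          # (batch, depth, H, W)
        b, d, h, w = volume.shape
        tokens = self.slice_encoder(volume.reshape(b * d, 1, h, w)).reshape(b, d, -1)
        return self.encoder(tokens + self.pos)          # one contextual embedding per slice

vol = torch.randn(2, 32, 64, 64)                        # dummy 3D MRI volume
features = SliceSequenceEncoder()(vol)                  # (2, 32, 128)
```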
This list is automatically generated from the titles and abstracts of the papers on this site.