Learning Sequential Information in Task-based fMRI for Synthetic Data
Augmentation
- URL: http://arxiv.org/abs/2308.15564v1
- Date: Tue, 29 Aug 2023 18:36:21 GMT
- Title: Learning Sequential Information in Task-based fMRI for Synthetic Data
Augmentation
- Authors: Jiyao Wang, Nicha C. Dvornek, Lawrence H. Staib, and James S. Duncan
- Abstract summary: We propose an approach for generating synthetic fMRI sequences that can be used to create augmented training datasets in downstream learning.
The synthetic images are evaluated from multiple perspectives including visualizations and an autism spectrum disorder (ASD) classification task.
- Score: 10.629487323161323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Insufficiency of training data is a persistent issue in medical image
analysis, especially for task-based functional magnetic resonance images (fMRI)
with spatio-temporal imaging data acquired using specific cognitive tasks. In
this paper, we propose an approach for generating synthetic fMRI sequences that
can then be used to create augmented training datasets in downstream learning
tasks. To synthesize high-resolution task-specific fMRI, we adapt the
$\alpha$-GAN structure, leveraging advantages of both GAN and variational
autoencoder models, and propose different alternatives in aggregating temporal
information. The synthetic images are evaluated from multiple perspectives
including visualizations and an autism spectrum disorder (ASD) classification
task. The results show that the synthetic task-based fMRI can provide effective
data augmentation in learning the ASD classification task.
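As a hedged illustration of the "different alternatives in aggregating temporal information" mentioned above, the toy NumPy sketch below contrasts an order-agnostic mean over time with an order-aware, recency-weighted aggregation of per-frame latent codes. The array shapes, decay parameter, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-frame latent codes of an fMRI sequence:
# T time points, each encoded as a D-dimensional vector.
# (Shapes are illustrative assumptions, not the paper's setup.)
T, D = 8, 4
frames = rng.normal(size=(T, D))

def aggregate_mean(x):
    """Order-agnostic aggregation: average the latent codes over time."""
    return x.mean(axis=0)

def aggregate_recency(x, decay=0.5):
    """Order-aware aggregation: exponentially down-weight older frames."""
    weights = decay ** np.arange(len(x) - 1, -1, -1)  # newest frame weighs most
    weights /= weights.sum()                          # normalize to sum to 1
    return weights @ x

z_mean = aggregate_mean(frames)      # shape (D,)
z_recent = aggregate_recency(frames)  # shape (D,)
```

Because the mean discards temporal order while the weighted sum does not, the two aggregates differ whenever the sequence is not stationary, which is the kind of distinction a temporal-aggregation choice in the generator would have to make.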
Related papers
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Self-Supervised Pre-training Tasks for an fMRI Time-series Transformer in Autism Detection
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that encompasses a wide variety of symptoms and degrees of impairment.
We have developed a transformer-based self-supervised framework that directly analyzes time-series fMRI data without computing functional connectivity.
We show that randomly masking entire ROIs gives better model performance than randomly masking time points in the pre-training step.
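The two masking strategies compared in that pre-training step can be sketched in NumPy as follows; the matrix shape, mask ratio, and function names here are hypothetical, chosen only to illustrate the difference between hiding entire ROI columns and hiding scattered individual time points.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ROI time series: T time points x R regions of interest.
# (Dimensions and mask ratio are illustrative; a real parcellation differs.)
T, R = 10, 6
ts = rng.normal(size=(T, R))

def mask_rois(x, ratio=0.3, rng=rng):
    """Zero out entire ROI columns, so the model must reconstruct whole regions."""
    masked = x.copy()
    n_cols = max(1, int(ratio * x.shape[1]))
    cols = rng.choice(x.shape[1], size=n_cols, replace=False)
    masked[:, cols] = 0.0
    return masked, cols

def mask_timepoints(x, ratio=0.3, rng=rng):
    """Zero out randomly scattered individual (time, ROI) entries."""
    masked = x.copy()
    idx = rng.choice(x.size, size=int(ratio * x.size), replace=False)
    masked.flat[idx] = 0.0
    return masked, idx

roi_masked, cols = mask_rois(ts)
tp_masked, idx = mask_timepoints(ts)
```

ROI masking removes a region's entire time course, forcing reconstruction from the signals of other regions, whereas time-point masking leaves most of each region's trajectory visible.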
arXiv Detail & Related papers (2024-09-18T20:29:23Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel semantic alignment method for multi-subject fMRI signals.
The model is specifically designed to generate fMRI-conditioned feature vectors that can condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- Uncovering cognitive taskonomy through transfer learning in masked autoencoder-based fMRI reconstruction [6.3348067441225915]
We employ the masked autoencoder (MAE) model to reconstruct functional magnetic resonance imaging (fMRI) data.
Our study suggests that fMRI reconstruction with the MAE model can uncover latent representations.
arXiv Detail & Related papers (2024-05-24T09:29:16Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
But acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these artifacts shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
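As a hedged sketch of what these batch-independent normalization schemes compute, the NumPy functions below implement plain (unscaled, unshifted) Layer Normalization and Group Normalization over a 4-D activation tensor; the tensor shape and group count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each sample over all of its own features (C, H, W)."""
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def group_norm(x, groups, eps=1e-5):
    """Normalize each sample over channel groups; statistics never mix samples."""
    n, c, h, w = x.shape
    g = x.reshape(n, groups, c // groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

# Toy activations: batch of 2 samples, 4 channels, 3x3 spatial maps.
x = np.random.default_rng(1).normal(size=(2, 4, 3, 3))
ln_out = layer_norm(x)
gn_out = group_norm(x, groups=2)
```

Since neither scheme pools statistics across the batch dimension, each sample is normalized the same way at train and test time, which is the property the paper exploits for robustness to artifact-induced distribution shifts.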
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
- EEG to fMRI Synthesis: Is Deep Learning a candidate? [0.913755431537592]
This work provides the first comprehensive study on how to use state-of-the-art principles from Neural Processing to synthesize fMRI data from electroencephalographic (EEG) data.
A comparison of state-of-the-art synthesis approaches, including Autoencoders, Generative Adversarial Networks and Pairwise Learning, is undertaken.
Results highlight the feasibility of EEG to fMRI brain image mappings, pinpointing the role of current advances in Machine Learning and showing the relevance of upcoming contributions to further improve performance.
arXiv Detail & Related papers (2020-09-29T16:29:20Z)
- Attend and Decode: 4D fMRI Task State Decoding Using Attention Models [2.6954666679827137]
We present a novel architecture called Brain Attend and Decode (BAnD).
BAnD uses residual convolutional neural networks for spatial feature extraction and self-attention mechanisms for temporal modeling.
We achieve significant performance gain compared to previous works on a 7-task benchmark from the Human Connectome Project-Young Adult dataset.
arXiv Detail & Related papers (2020-04-10T21:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.