Multi-Dimension-Embedding-Aware Modality Fusion Transformer for
Psychiatric Disorder Classification
- URL: http://arxiv.org/abs/2310.02690v1
- Date: Wed, 4 Oct 2023 10:02:04 GMT
- Title: Multi-Dimension-Embedding-Aware Modality Fusion Transformer for
Psychiatric Disorder Classification
- Authors: Guoxin Wang, Xuyang Cao, Shan An, Fengmei Fan, Chao Zhang, Jinsong
Wang, Feng Yu, Zhiren Wang
- Abstract summary: We construct a deep learning architecture that takes as input the 2D time series of rs-fMRI and 3D T1w volumes.
We show that our proposed MFFormer outperforms models that use either a single modality or multi-modality MRI on schizophrenia and bipolar disorder diagnosis.
- Score: 13.529183496842819
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning approaches, together with neuroimaging techniques, play an
important role in psychiatric disorders classification. Previous studies on
psychiatric disorders diagnosis mainly focus on using functional connectivity
matrices of resting-state functional magnetic resonance imaging (rs-fMRI) as
input, an approach that fails to fully utilize the rich temporal information in
the rs-fMRI time series. In this work, we propose a
multi-dimension-embedding-aware modality fusion transformer (MFFormer) for
schizophrenia and bipolar disorder classification using rs-fMRI and T1 weighted
structural MRI (T1w sMRI). Concretely, to fully utilize the temporal
information of rs-fMRI and spatial information of sMRI, we constructed a deep
learning architecture that takes as input the 2D time series of rs-fMRI and 3D
T1w volumes. Furthermore, to promote intra-modality attention and information
fusion across different modalities, a fusion transformer module (FTM) is
designed that applies extensive self-attention over hybrid multi-modality
feature maps. In addition, a dimension-up and dimension-down strategy is
proposed to properly align the multi-dimensional feature maps from the
different modalities. Experimental results on our private dataset and the
public OpenfMRI dataset show that the proposed MFFormer outperforms
counterparts using either a single modality or multi-modality MRI on
schizophrenia and bipolar disorder diagnosis.
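The fusion idea in the abstract (align each modality's feature maps to a shared embedding, then run self-attention over the hybrid token set so the two modalities exchange information) can be sketched minimally in NumPy. The token counts, embedding width, and the omission of learned query/key/value projections are illustrative assumptions, not the paper's actual FTM configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x):
    """Single-head scaled dot-product self-attention; the learned
    query/key/value projections are omitted for brevity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                       # (N, N) token similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # row-wise softmax
    return w @ x                                        # (N, d) attended features

# Hypothetical encoder outputs: the 2D rs-fMRI time series yields ROI tokens,
# the 3D T1w volume yields pooled patch tokens; both are assumed already
# aligned to a shared embedding width before fusion.
fmri_tokens = rng.standard_normal((116, 64))   # e.g. 116 ROI tokens
smri_tokens = rng.standard_normal((27, 64))    # e.g. 3x3x3 pooled patches

# Fusion step: self-attention over the hybrid token set, so every fMRI
# token can attend to every sMRI token and vice versa.
hybrid = np.concatenate([fmri_tokens, smri_tokens], axis=0)  # (143, 64)
fused = self_attention(hybrid)
print(fused.shape)  # (143, 64)
```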
Related papers
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method of multi-subject fMRI signals using so-called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- An Interpretable Cross-Attentive Multi-modal MRI Fusion Framework for Schizophrenia Diagnosis [46.58592655409785]
We propose a novel Cross-Attentive Multi-modal Fusion framework (CAMF) to capture both intra-modal and inter-modal relationships between fMRI and sMRI.
Our approach significantly improves classification accuracy, as demonstrated by our evaluations on two extensive multi-modal brain imaging datasets.
The gradient-guided Score-CAM is applied to interpret critical functional networks and brain regions involved in schizophrenia.
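The cross-attentive fusion mechanism described above can be illustrated loosely in NumPy: queries from one modality attend to keys and values from the other. The token counts, embedding width, and absence of learned projections are hypothetical simplifications, not CAMF's actual design:

```python
import numpy as np

def cross_attention(q_tokens, kv_tokens):
    """One cross-attention pass: queries from one modality attend to
    keys/values from the other (learned projections omitted)."""
    d = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)        # (Nq, Nk)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over keys
    return w @ kv_tokens                                # (Nq, d)

rng = np.random.default_rng(0)
fmri = rng.standard_normal((116, 32))   # hypothetical fMRI feature tokens
smri = rng.standard_normal((27, 32))    # hypothetical sMRI feature tokens

# fMRI tokens query the sMRI tokens; a symmetric pass in the other
# direction would capture the reverse inter-modal relationship.
fused_fmri = cross_attention(fmri, smri)
print(fused_fmri.shape)  # (116, 32)
```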
arXiv Detail & Related papers (2024-03-29T20:32:30Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumor represents one of the most fatal cancers around the world, and is very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
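The "unified 2D representations" step in this summary can be illustrated loosely: one simple way to map an (ROIs × timepoints) fMRI matrix onto a fixed-size 2D grid is nearest-neighbour resampling of rows and columns. This is an assumed stand-in for dimensional consistency, not fMRI-PTE's actual transformation:

```python
import numpy as np

def to_unified_2d(ts, out_shape=(64, 64)):
    """Resample an (n_rois, n_timepoints) fMRI matrix onto a fixed 2D grid
    by nearest-neighbour index selection, so inputs of different sizes
    yield representations of identical dimensions."""
    rows = np.linspace(0, ts.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, ts.shape[1] - 1, out_shape[1]).round().astype(int)
    return ts[np.ix_(rows, cols)]

# Hypothetical input: 116 ROIs observed over 200 timepoints.
x = np.random.default_rng(1).standard_normal((116, 200))
u = to_unified_2d(x)
print(u.shape)  # (64, 64)
```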
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Input Agnostic Deep Learning for Alzheimer's Disease Classification Using Multimodal MRI Images [1.4848525762485871]
Alzheimer's disease (AD) is a progressive brain disorder that causes memory and functional impairments.
In this work, we utilize a multi-modal deep learning approach in classifying normal cognition, mild cognitive impairment and AD classes.
arXiv Detail & Related papers (2021-07-19T08:19:34Z)
- Meta-modal Information Flow: A Method for Capturing Multimodal Modular Disconnectivity in Schizophrenia [11.100316178148994]
We introduce a method that takes advantage of multimodal data in addressing the hypotheses of disconnectivity and dysfunction within schizophrenia (SZ).
We propose a modularity-based method that can be applied to the GGM to identify links that are associated with mental illness across a multimodal data set.
Through simulation and real data, we show our approach reveals important information about disease-related network disruptions that are missed with a focus on a single modality.
arXiv Detail & Related papers (2020-01-06T18:46:41Z)
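The modularity-based analysis in the entry above rests on a standard quantity, Newman's modularity Q, which scores how much a partition's within-community edge density exceeds that of a degree-matched random graph. A minimal NumPy sketch on a toy graph (the graph and labels are hypothetical, not data from the paper):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a node partition `labels` over a symmetric
    adjacency matrix A: Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j]."""
    k = A.sum(axis=1)                                   # node degrees
    two_m = A.sum()                                     # 2 * number of edges
    same = labels[:, None] == labels[None, :]           # co-membership mask
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Toy graph: two 3-cliques joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])                   # the two cliques
print(round(modularity(A, labels), 3))  # 0.357
```

A positive Q indicates more within-community edges than chance; disease-related disruptions of such modular structure are what the paper's method probes across modalities.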
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.