Meta-modal Information Flow: A Method for Capturing Multimodal Modular
Disconnectivity in Schizophrenia
- URL: http://arxiv.org/abs/2001.01707v1
- Date: Mon, 6 Jan 2020 18:46:41 GMT
- Title: Meta-modal Information Flow: A Method for Capturing Multimodal Modular
Disconnectivity in Schizophrenia
- Authors: Haleh Falakshahi, Victor M. Vergara, Jingyu Liu, Daniel H. Mathalon,
Judith M. Ford, James Voyvodic, Bryon A. Mueller, Aysenil Belger, Sarah
McEwen, Steven G. Potkin, Adrian Preda, Hooman Rokham, Jing Sui, Jessica A.
Turner, Sergey Plis, and Vince D. Calhoun
- Abstract summary: We introduce a method that takes advantage of multimodal data in addressing the hypotheses of disconnectivity and dysfunction within schizophrenia (SZ).
We propose a modularity-based method that can be applied to the GGM to identify links that are associated with mental illness across a multimodal data set.
Through simulation and real data, we show our approach reveals important information about disease-related network disruptions that are missed with a focus on a single modality.
- Score: 11.100316178148994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Multimodal measurements of the same phenomena provide
complementary information and highlight different perspectives, albeit each
with its own limitations. A focus on a single modality may lead to incorrect
inferences, which is especially important when the studied phenomenon is a
disease. In this paper, we introduce a method that takes advantage of
multimodal data in addressing the hypotheses of disconnectivity and dysfunction
within schizophrenia (SZ). Methods: We start with estimating and visualizing
links within and among extracted multimodal data features using a Gaussian
graphical model (GGM). We then propose a modularity-based method that can be
applied to the GGM to identify links that are associated with mental illness
across a multimodal data set. Through simulation and real data, we show our
approach reveals important information about disease-related network
disruptions that are missed with a focus on a single modality. We use
functional MRI (fMRI), diffusion MRI (dMRI), and structural MRI (sMRI) to
compute the fractional amplitude of low frequency fluctuations (fALFF),
fractional anisotropy (FA), and gray matter (GM) concentration maps. These
three modalities are analyzed using our modularity method. Results: Our results
show missing links that are captured only by the cross-modal information and that
may play an important role in the disconnectivity between components.
Conclusion: We identified multimodal (fALFF, FA and GM) disconnectivity in the
default mode network area in patients with SZ, which would not have been
detectable in a single modality. Significance: The proposed approach provides
an important new tool for capturing information that is distributed among
multiple imaging modalities.
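As a concrete illustration of the pipeline described above, the following is a minimal sketch (not the authors' released code) of its two core steps: estimating a sparse Gaussian graphical model over concatenated multimodal features and partitioning the resulting graph into modules by modularity maximization. The arrays X_falff, X_fa, and X_gm are hypothetical placeholders for per-subject fALFF, FA, and GM feature loadings, and scikit-learn's graphical lasso plus networkx's greedy modularity routine stand in for the paper's GGM estimation and modularity-based link analysis.

```python
import numpy as np
import networkx as nx
from sklearn.covariance import GraphicalLassoCV
from networkx.algorithms.community import greedy_modularity_communities

# Toy simulation: a shared latent factor induces correlations within and across
# the three hypothetical modality feature blocks (fALFF, FA, GM).
rng = np.random.default_rng(0)
n_subjects, n_feat = 100, 10
latent = rng.standard_normal((n_subjects, 3))
X_falff = latent @ rng.standard_normal((3, n_feat)) + 0.5 * rng.standard_normal((n_subjects, n_feat))
X_fa = latent @ rng.standard_normal((3, n_feat)) + 0.5 * rng.standard_normal((n_subjects, n_feat))
X_gm = latent @ rng.standard_normal((3, n_feat)) + 0.5 * rng.standard_normal((n_subjects, n_feat))

# Concatenate modalities and z-score each feature before GGM estimation.
X = np.hstack([X_falff, X_fa, X_gm])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Sparse inverse covariance (precision) matrix: nonzero off-diagonal entries
# are the conditional-dependence links of the Gaussian graphical model.
precision = GraphicalLassoCV().fit(X).precision_

# Build a weighted graph from the absolute off-diagonal precision values.
adjacency = np.abs(precision)
np.fill_diagonal(adjacency, 0.0)
graph = nx.from_numpy_array(adjacency)

# Partition the nodes into modules by greedy modularity maximization; links that
# bridge modules (or modalities) are candidates for cross-modal disconnectivity.
modules = greedy_modularity_communities(graph, weight="weight")
for i, module in enumerate(modules):
    print(f"module {i}: nodes {sorted(module)}")
```

In the paper's setting, each node would correspond to a feature extracted from fMRI, dMRI, or sMRI, and group differences in links that cross modules or modalities are what the modularity-based analysis is designed to expose.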
Related papers
- Augmentation-based Unsupervised Cross-Domain Functional MRI Adaptation for Major Depressive Disorder Identification [23.639488571585044]
Major depressive disorder (MDD) is a common mental disorder that typically affects a person's mood, cognition, behavior, and physical health.
In this work, we propose a new augmentation-based unsupervised cross-domain fMRI adaptation framework for automatic diagnosis of MDD.
arXiv: 2024-05-31T13:55:33Z
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals, called MindFormer.
The model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv: 2024-05-28T00:36:25Z
- FORESEE: Multimodal and Multi-view Representation Learning for Robust Prediction of Cancer Survival [3.4686401890974197]
We propose a new end-to-end framework, FORESEE, for robustly predicting patient survival by mining multimodal information.
The cross-fusion transformer effectively utilizes features at the cellular level, tissue level, and tumor heterogeneity level to correlate them with prognosis.
The hybrid attention encoder (HAE) uses the denoising contextual attention module to obtain the contextual relationship features.
We also propose an asymmetrically masked triplet masked autoencoder to reconstruct lost information within modalities.
arXiv: 2024-05-13T12:39:08Z
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv: 2024-03-27T02:42:52Z
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv: 2024-01-17T07:54:49Z
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains, but acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv: 2023-08-24T01:30:18Z
- Robust Fiber ODF Estimation Using Deep Constrained Spherical Deconvolution for Diffusion MRI [7.9283612449524155]
A common practice is to model the measured DW-MRI signal via the fiber orientation distribution function (fODF).
Measurement variabilities (e.g., inter- and intra-site variability, hardware performance, and sequence design) are inevitable during the acquisition of DW-MRI.
Most existing model-based methods (e.g., constrained spherical deconvolution (CSD)) and learning-based methods (e.g., deep learning (DL)) do not explicitly consider such variabilities in fODF modeling.
We propose a novel data-driven deep constrained spherical deconvolution method that explicitly accounts for such variabilities in fODF estimation.
arXiv: 2023-06-05T14:06:40Z
- Cross-Modal Causal Intervention for Medical Report Generation [109.83549148448469]
Medical report generation (MRG) is essential for computer-aided diagnosis and medication guidance.
Due to the spurious correlations within image-text data induced by visual and linguistic biases, it is challenging to generate accurate reports reliably describing lesion areas.
We propose a novel Visual-Linguistic Causal Intervention (VLCI) framework for MRG, which consists of a visual deconfounding module (VDM) and a linguistic deconfounding module (LDM).
arXiv: 2023-03-16T07:23:55Z
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with a feature disentanglement strategy.
The proposed approach decomposes each input modality into a modality-invariant space with shared information and a modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv: 2021-05-06T17:22:22Z
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into the modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv: 2020-02-22T14:32:04Z
This list is automatically generated from the titles and abstracts of the papers on this site.