Self-supervised multimodal neuroimaging yields predictive
representations for a spectrum of Alzheimer's phenotypes
- URL: http://arxiv.org/abs/2209.02876v1
- Date: Wed, 7 Sep 2022 01:37:19 GMT
- Title: Self-supervised multimodal neuroimaging yields predictive
representations for a spectrum of Alzheimer's phenotypes
- Authors: Alex Fedorov, Eloy Geenjaar, Lei Wu, Tristan Sylvain, Thomas P.
DeRamus, Margaux Luck, Maria Misiura, R Devon Hjelm, Sergey M. Plis, Vince D.
Calhoun
- Abstract summary: This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
- Score: 27.331511924585023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent neuroimaging studies that focus on predicting brain disorders via
modern machine learning approaches commonly include a single modality and rely
on supervised over-parameterized models. However, a single modality provides
only a limited view of the highly complex brain. Critically, supervised models
in clinical settings lack accurate diagnostic labels for training. Coarse
labels do not capture the long-tailed spectrum of brain disorder phenotypes,
which leads to a loss of generalizability, making such models less useful in
diagnostic settings. This work presents a novel multi-scale
coordinated framework for learning multiple representations from multimodal
neuroimaging data. We propose a general taxonomy of informative inductive
biases to capture unique and joint information in multimodal self-supervised
fusion. The taxonomy forms a family of decoder-free models with reduced
computational complexity and a propensity to capture multi-scale relationships
between local and global representations of the multimodal inputs. We conduct a
comprehensive evaluation of the taxonomy using functional and structural
magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease
phenotypes and show that self-supervised models reveal disorder-relevant brain
regions and multimodal links without access to the labels during pre-training.
The proposed multimodal self-supervised learning yields representations with
improved classification performance for both modalities. The concomitant rich
and flexible unsupervised deep learning framework captures complex multimodal
relationships and provides predictive performance that meets or exceeds that of
a narrower supervised classification analysis. We present detailed
quantitative evidence of how this framework can significantly advance our
search for missing links in complex brain disorders.
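To make the decoder-free, multi-scale coordination described in the abstract concrete, here is a minimal sketch of a cross-modal contrastive objective with both global-global and local-global (DIM-style) terms. Everything in it (the toy 2D encoders, InfoNCE with in-batch negatives, the shapes, and hyperparameters such as `temperature`) is an illustrative assumption, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Tiny 2D CNN standing in for a modality-specific neuroimaging encoder."""

    def __init__(self, in_channels: int, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        fmap = self.conv(x)             # local features: (B, dim, H', W')
        pooled = fmap.mean(dim=(2, 3))  # global summary: (B, dim)
        return fmap, self.head(pooled)


def info_nce(queries, keys, temperature=0.1):
    """Global-global InfoNCE: matched pairs in the batch are positives."""
    queries = F.normalize(queries, dim=-1)
    keys = F.normalize(keys, dim=-1)
    logits = queries @ keys.t() / temperature               # (B, B)
    targets = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(logits, targets)


def local_global_loss(fmap, z, temperature=0.1):
    """DIM-style term: each local location of one modality's feature map is
    contrasted against the other modality's global embeddings in the batch."""
    b, d, h, w = fmap.shape
    locals_ = F.normalize(fmap.permute(0, 2, 3, 1).reshape(b, h * w, d), dim=-1)
    z = F.normalize(z, dim=-1)
    logits = torch.einsum("bld,cd->blc", locals_, z) / temperature  # (B, HW, B)
    targets = torch.arange(b, device=z.device).unsqueeze(1).expand(b, h * w)
    return F.cross_entropy(logits.reshape(b * h * w, b), targets.reshape(-1))


# Two modality-specific encoders, e.g. one for fMRI-derived maps, one for sMRI.
enc_a, enc_b = Encoder(1), Encoder(1)
xa, xb = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)  # paired inputs
fmap_a, za = enc_a(xa)
fmap_b, zb = enc_b(xb)

# Coordinated decoder-free objective: cross-modal global alignment plus
# cross-modal local-to-global terms; no reconstruction loss anywhere.
loss = info_nce(za, zb) + local_global_loss(fmap_a, zb) + local_global_loss(fmap_b, za)
loss.backward()
```

The design point the sketch illustrates is that no decoder or reconstruction target is needed: gradients come entirely from matching paired local and global embeddings across modalities, which is what keeps the computational cost of such coordinated models low.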
Related papers
- Generative forecasting of brain activity enhances Alzheimer's classification and interpretation [16.09844316281377]
Resting-state functional magnetic resonance imaging (rs-fMRI) offers a non-invasive method to monitor neural activity.
Deep learning has shown promise in capturing these representations.
In this study, we focus on time series forecasting of independent component networks derived from rs-fMRI as a form of data augmentation.
arXiv Detail & Related papers (2024-10-30T23:51:31Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging-genetics data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both the visual and language encoders trained multimodally are more brain-like than unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z)
- Cross-Modality Neuroimage Synthesis: A Survey [71.27193056354741]
Multi-modality imaging improves disease diagnosis and reveals distinct deviations in tissues with anatomical properties.
Completely aligned and paired multi-modality neuroimaging data have proven effective in brain research; in practice, however, such data are difficult to acquire.
An alternative solution is to explore unsupervised or weakly supervised learning methods to synthesize the absent neuroimaging data.
arXiv Detail & Related papers (2022-02-14T19:29:08Z)
- Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction [30.99183477161096]
We propose a novel representation learning and adversarial hypergraph fusion framework for Alzheimer's disease diagnosis.
Our model achieves superior performance on Alzheimer's disease detection compared with other related models.
arXiv Detail & Related papers (2021-07-21T08:08:05Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture that integrates imaging and genetics data, as guided by diagnosis, and provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction [33.03449099154264]
Multi-modal functional magnetic resonance imaging (fMRI) can be used to predict individual behavioral and cognitive traits from brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model that incorporates the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort by predicting individual Wide Range Achievement Test (WRAT) scores.
arXiv Detail & Related papers (2021-01-20T20:53:07Z)
- Self-Supervised Multimodal Domino: in Search of Biomarkers for Alzheimer's Disease [19.86082635340699]
We propose a taxonomy of all reasonable ways to organize self-supervised representation-learning algorithms.
We first evaluate models on toy multimodal MNIST datasets and then apply them to a multimodal neuroimaging dataset of Alzheimer's disease patients.
Results show that the proposed approach outperforms previous self-supervised encoder-decoder methods.
arXiv Detail & Related papers (2020-12-25T20:28:13Z)
- On self-supervised multi-modal representation learning: An application to Alzheimer's disease [21.495288589801476]
Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD).
Deep unsupervised and, more recently, contrastive self-supervised approaches, which are not biased toward classification, are better candidates for the task.
arXiv Detail & Related papers (2020-12-25T19:51:19Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important for drawing general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, in which data from each subject are modeled as a linear combination of shared independent sources plus noise (sketched below).
We first demonstrate the usefulness of our approach on fMRI data, where the model shows improved sensitivity in identifying sources common across subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
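As a brief aside on the last entry: the "linear combination of shared independent sources plus noise" in MultiView ICA corresponds to a per-subject generative model of roughly the following form. The notation below is our paraphrase of the summary, not the paper's exact formulation.

```latex
% Per-subject generative model in MultiView ICA (notation is a paraphrase):
% subject i observes a subject-specific linear mixture A_i of sources s
% shared across the group, corrupted by subject-specific noise n_i.
\[
  \mathbf{x}_i = A_i \mathbf{s} + \mathbf{n}_i, \qquad i = 1, \dots, m,
\]
% where the components of s are assumed mutually independent (the "IC" in ICA).
```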