Incomplete Multimodal Learning for Complex Brain Disorders Prediction
- URL: http://arxiv.org/abs/2305.16222v1
- Date: Thu, 25 May 2023 16:29:16 GMT
- Title: Incomplete Multimodal Learning for Complex Brain Disorders Prediction
- Authors: Reza Shirkavand, Liang Zhan, Heng Huang, Li Shen, Paul M. Thompson
- Abstract summary: We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging-genetics data from the Alzheimer's Disease Neuroimaging Initiative cohort.
- Score: 65.95783479249745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in the acquisition of various brain data sources have
created new opportunities for integrating multimodal brain data to assist in
early detection of complex brain disorders. However, current data integration
approaches typically need a complete set of biomedical data modalities, which
may not always be feasible, as some modalities are only available in
large-scale research cohorts and prohibitively expensive to collect in routine
clinical practice. In studies of brain diseases especially, research cohorts
may include both neuroimaging and genetic data, yet practical clinical
diagnosis often must rely on neuroimages alone. It is therefore desirable to
design machine learning models that can use all available data (different
modalities may provide complementary information) during training but conduct
inference using only the most commonly available modality. We
propose a new incomplete multimodal data integration approach that employs
transformers and generative adversarial networks to effectively exploit
auxiliary modalities available during training in order to improve the
performance of a unimodal model at inference. We apply our new method to
predict cognitive degeneration and disease outcomes using multimodal
imaging-genetics data from the Alzheimer's Disease Neuroimaging Initiative
(ADNI) cohort. Experimental results demonstrate that our approach outperforms
related machine learning and deep learning methods by a significant margin.
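The abstract describes the mechanism only at a high level. As a rough illustration of the training-versus-inference asymmetry it describes, the following is a minimal PyTorch sketch, not the authors' architecture: a generator learns to synthesize genetic-like features from imaging features under a GAN objective, a small transformer classifier attends over the two modality tokens, and at inference the generator fills in the missing modality. All module names and dimensions (IMG_DIM, GEN_DIM, GeneticGenerator, and so on) are invented for illustration.

```python
import torch
import torch.nn as nn

IMG_DIM, GEN_DIM, HID = 256, 128, 64  # hypothetical feature sizes

class GeneticGenerator(nn.Module):
    """Synthesizes genetic-like features from imaging features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, HID), nn.ReLU(),
                                 nn.Linear(HID, GEN_DIM))
    def forward(self, img):
        return self.net(img)

class Discriminator(nn.Module):
    """Distinguishes real genetic features from generated ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(GEN_DIM, HID), nn.ReLU(),
                                 nn.Linear(HID, 1))
    def forward(self, gen):
        return self.net(gen)

class Classifier(nn.Module):
    """Transformer over the two modality tokens, then a prediction head."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.img_proj = nn.Linear(IMG_DIM, HID)
        self.gen_proj = nn.Linear(GEN_DIM, HID)
        layer = nn.TransformerEncoderLayer(d_model=HID, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(HID, n_classes)
    def forward(self, img, gen):
        tokens = torch.stack([self.img_proj(img), self.gen_proj(gen)], dim=1)
        return self.head(self.encoder(tokens).mean(dim=1))

G, D, C = GeneticGenerator(), Discriminator(), Classifier()
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

def training_step(img, gen_real, label):
    """Training sees both modalities: the GAN pushes G toward realistic
    genetic features while the classifier learns from the fused tokens."""
    gen_fake = G(img)
    d_loss = (bce(D(gen_real), torch.ones(len(img), 1)) +
              bce(D(gen_fake.detach()), torch.zeros(len(img), 1)))
    g_loss = bce(D(gen_fake), torch.ones(len(img), 1))
    cls_loss = ce(C(img, gen_fake), label)
    return d_loss, g_loss + cls_loss

def predict(img):
    """Inference needs imaging only: G fills in the missing modality."""
    return C(img, G(img)).argmax(dim=1)
```

The point of the sketch is the asymmetry: real genetic data and the discriminator appear only in training_step, while predict touches imaging alone.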
Related papers
- Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Leveraging federation, we aggregate locally learned models across diverse hospitals, each with limited data.
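The snippet does not spell out FedGmTE-Net++'s aggregation rule. As a minimal sketch of the federated averaging that such cross-hospital training typically builds on (function and parameter names hypothetical, and the local loss a stand-in for trajectory regression):

```python
import copy
import torch
import torch.nn as nn

def federated_round(global_model, hospital_loaders, local_epochs=1, lr=1e-3):
    """One communication round: each hospital trains locally on its own
    scarce data, then the server averages the resulting weights."""
    local_states = []
    for loader in hospital_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.MSELoss()  # stand-in for a trajectory-regression loss
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        local_states.append(local.state_dict())
    # Server side: parameter-wise mean of the local models.
    avg = {k: torch.stack([s[k].float() for s in local_states]).mean(0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```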
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- Building Flexible, Scalable, and Machine Learning-ready Multimodal Oncology Datasets [17.774341783844026]
This work proposes the Multimodal Integration of Oncology Data System (MINDS).
MINDS is a flexible, scalable, and cost-effective metadata framework for efficiently fusing disparate data from public sources.
By harmonizing multimodal data, MINDS aims to empower researchers with greater analytical capabilities.
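MINDS is summarized here only at the level of goals. Purely to illustrate what harmonizing disparate, patient-keyed records can mean in practice, a toy sketch follows; the field names are invented, not the MINDS schema:

```python
from collections import defaultdict

# Toy records from two hypothetical public sources, keyed differently.
imaging = [{"patient_id": "P01", "modality": "CT", "series": 42}]
clinical = [{"case": "P01", "diagnosis": "LUAD", "stage": "II"}]

def harmonize(imaging_recs, clinical_recs):
    """Merge per-patient records under one schema so downstream
    multimodal models can query a single consistent view."""
    merged = defaultdict(dict)
    for rec in imaging_recs:
        merged[rec["patient_id"]].setdefault("imaging", []).append(
            {"modality": rec["modality"], "series": rec["series"]})
    for rec in clinical_recs:
        merged[rec["case"]]["clinical"] = {"diagnosis": rec["diagnosis"],
                                           "stage": rec["stage"]}
    return dict(merged)

print(harmonize(imaging, clinical))
# {'P01': {'imaging': [...], 'clinical': {...}}}
```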
arXiv Detail & Related papers (2023-09-30T15:44:39Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
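The summary does not state the pre-training objective. One common way to exploit large sets of imaging-report pairs is CLIP-style contrastive alignment of an image encoder and a text encoder, sketched below; this is an assumption about the general technique, not a description of UniBrain's loss:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Pull each MRI embedding toward its own report embedding and push
    apart mismatched pairs within the batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(img))       # i-th image matches i-th report
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Smoke test with random stand-ins for encoder outputs.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```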
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review [0.0]
Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment.
Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches.
Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning.
arXiv Detail & Related papers (2023-03-11T17:52:03Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
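The paper's taxonomy of inductive biases is not detailed in this snippet. One simple instantiation of "unique and joint information" is a per-modality reconstruction term plus a cross-modality agreement term; the sketch below shows that generic idea with invented dimensions, not the paper's framework:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityBranch(nn.Module):
    """Encoder plus decoder: the latent must reconstruct its own modality
    (unique information) and agree with the other latent (joint information)."""
    def __init__(self, dim, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

fmri_branch, smri_branch = ModalityBranch(200), ModalityBranch(150)

def ssl_loss(x_fmri, x_smri, w_joint=1.0):
    z_f, rec_f = fmri_branch(x_fmri)
    z_s, rec_s = smri_branch(x_smri)
    unique = F.mse_loss(rec_f, x_fmri) + F.mse_loss(rec_s, x_smri)
    joint = 1 - F.cosine_similarity(z_f, z_s, dim=-1).mean()
    return unique + w_joint * joint  # no labels anywhere: self-supervised

loss = ssl_loss(torch.randn(16, 200), torch.randn(16, 150))
```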
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- DeepAD: A Robust Deep Learning Model of Alzheimer's Disease Progression for Real-World Clinical Applications [0.9999629695552196]
We propose a novel multi-task deep learning model to predict Alzheimer's disease progression.
Our model integrates high dimensional MRI features from a 3D convolutional neural network with other data modalities.
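As a minimal sketch of the pattern this summary describes, a small 3D CNN extracts MRI features, which are concatenated with a tabular-modality vector and fed to one head per task; the layer sizes and head names are invented, not DeepAD's:

```python
import torch
import torch.nn as nn

class MultiTaskAD(nn.Module):
    """3D-CNN MRI features fused with tabular features, a shared trunk,
    and one head per prediction task (multi-task learning)."""
    def __init__(self, tab_dim=20):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())   # -> (B, 16)
        self.trunk = nn.Sequential(nn.Linear(16 + tab_dim, 64), nn.ReLU())
        self.head_dx = nn.Linear(64, 3)    # e.g., CN / MCI / AD diagnosis
        self.head_prog = nn.Linear(64, 1)  # e.g., a progression score

    def forward(self, mri, tab):
        h = self.trunk(torch.cat([self.cnn(mri), tab], dim=1))
        return self.head_dx(h), self.head_prog(h)

model = MultiTaskAD()
dx, prog = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 20))
```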
arXiv Detail & Related papers (2022-03-17T05:42:00Z)
- Evaluation and Analysis of Different Aggregation and Hyperparameter Selection Methods for Federated Brain Tumor Segmentation [2.294014185517203]
We focus on the federated learning paradigm, a distributed learning approach for decentralized data.
Studies show that federated learning can provide performance competitive with conventional centralized training.
We explore different strategies for faster convergence and better performance that also work in strongly non-IID settings.
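No specific aggregation rule is named in this snippet. One standard strategy compared in such studies is weighting each site's parameters by its local sample count rather than averaging uniformly, which matters precisely in non-IID settings; a toy sketch:

```python
import torch

def aggregate(local_states, sample_counts):
    """Sample-size-weighted averaging (the FedAvg weighting); under
    non-IID splits this can differ markedly from a uniform mean."""
    total = sum(sample_counts)
    weights = [n / total for n in sample_counts]
    return {k: sum(w * s[k].float() for w, s in zip(weights, local_states))
            for k in local_states[0]}

# Two hypothetical sites holding very different amounts of data.
s1 = {"w": torch.tensor([1.0, 1.0])}
s2 = {"w": torch.tensor([3.0, 3.0])}
print(aggregate([s1, s2], sample_counts=[10, 90]))  # {'w': tensor([2.8, 2.8])}
```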
arXiv Detail & Related papers (2022-02-16T07:49:04Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to compensate for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model that can extract common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
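The Prototypical Network step itself is standard: embed the support set, average the embeddings per class to get prototypes, and assign each query to the nearest prototype. A minimal sketch, with the embedding network omitted and inputs assumed to be already embedded:

```python
import torch

def proto_classify(support, support_labels, query, n_classes):
    """Each class prototype is the mean of its support embeddings;
    queries take the label of the nearest prototype."""
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_classes)])
    dists = torch.cdist(query, protos)   # (n_query, n_classes) distances
    return dists.argmin(dim=1)

# Toy episode: 2 classes, 3 support examples each, 16-d embeddings.
support = torch.randn(6, 16)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
pred = proto_classify(support, labels, torch.randn(4, 16), n_classes=2)
```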
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We first demonstrate the usefulness of our approach on fMRI data, where our model shows improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
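The generative model in the MultiView ICA summary can be written as x_i = A_i s + n_i: subject i's data are a subject-specific linear mixture A_i of sources s shared by everyone, plus noise. A toy numpy simulation of that model, with invented dimensions (the method itself would jointly estimate s and the A_i from such data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sources, n_samples = 5, 4, 1000

# Shared independent sources; Laplacian draws keep them non-Gaussian,
# which is what makes ICA-style recovery possible.
s = rng.laplace(size=(n_sources, n_samples))

subject_data = []
for _ in range(n_subjects):
    A_i = rng.normal(size=(n_sources, n_sources))  # subject-specific mixing
    noise = 0.1 * rng.normal(size=(n_sources, n_samples))
    subject_data.append(A_i @ s + noise)           # x_i = A_i s + n_i
```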
This list is automatically generated from the titles and abstracts of the papers on this site.