MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction
- URL: http://arxiv.org/abs/2307.07093v1
- Date: Thu, 13 Jul 2023 23:52:41 GMT
- Title: MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction
- Authors: Niharika S. D'Souza, Hongzhi Wang, Andrea Giovannini, Antonio
Foncubierta-Rodriguez, Kristen L. Beck, Orest Boyko, Tanveer Syeda-Mahmood
- Abstract summary: We develop an innovative fusion approach called MaxCorr MGNN that models non-linear modality correlations within and across patients.
We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs.
We evaluate our model on an outcome prediction task on a Tuberculosis dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
- Score: 3.2889220522843625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the emergence of multimodal electronic health records, the evidence for
an outcome may be captured across multiple modalities ranging from clinical to
imaging and genomic data. Predicting outcomes effectively requires fusion
frameworks capable of modeling fine-grained and multi-faceted complex
interactions between modality features within and across patients. We develop
an innovative fusion approach called MaxCorr MGNN that models non-linear
modality correlations within and across patients through
Hirschfeld-Gebelein-Renyi maximal correlation (MaxCorr) embeddings, resulting
in a multi-layered graph that preserves the identities of the modalities and
patients. We then design, for the first time, a generalized multi-layered graph
neural network (MGNN) for task-informed reasoning in multi-layered graphs, that
learns the parameters defining patient-modality graph connectivity and message
passing in an end-to-end fashion. We evaluate our model on an outcome prediction
task on a Tuberculosis (TB) dataset, consistently outperforming several
state-of-the-art neural, graph-based, and traditional fusion techniques.
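The abstract gives no implementation details beyond the above. As a rough illustration of the MaxCorr component, here is a minimal PyTorch sketch of the soft-HGR surrogate objective commonly used to train Hirschfeld-Gebelein-Renyi maximal correlation embeddings; the class name, tensor shapes, and exact loss form are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class SoftHGRLoss(nn.Module):
    # Soft-HGR surrogate for HGR maximal correlation between two
    # modality embeddings f(X) and g(Y) of the same patients.
    # Hypothetical sketch; not the authors' implementation.
    def forward(self, f_x: torch.Tensor, g_y: torch.Tensor) -> torch.Tensor:
        # f_x, g_y: (batch, k) embeddings from two modality encoders
        f_x = f_x - f_x.mean(dim=0, keepdim=True)   # zero-mean features
        g_y = g_y - g_y.mean(dim=0, keepdim=True)
        n = f_x.size(0)
        corr = (f_x * g_y).sum() / (n - 1)          # estimate of E[f(X)^T g(Y)]
        cov_f = f_x.t() @ f_x / (n - 1)             # Cov(f(X))
        cov_g = g_y.t() @ g_y / (n - 1)             # Cov(g(Y))
        # Maximize cross-modal correlation while penalizing correlated
        # feature dimensions; negate so the score can be minimized.
        return -(corr - 0.5 * torch.trace(cov_f @ cov_g))
```

Roughly speaking, inner products between embeddings trained this way can serve as learned edge weights between patient-modality nodes, yielding the multi-layered graph the abstract describes. For the MGNN half, the sketch below shows generic message passing on such a graph, with one plane per modality and "pillar" connections tying a patient's copies together across planes; the layer structure and update rule are again assumptions, not the paper's equations.

```python
class MultiplexGNNLayer(nn.Module):
    # Generic message passing on a multi-layered (multiplex) graph:
    # each patient appears once per modality plane. Hypothetical
    # sketch under assumed conventions, not the authors' MGNN.
    def __init__(self, dim: int, num_planes: int):
        super().__init__()
        self.intra = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_planes)])
        self.pillar = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (L, N, d) node features, one plane per modality
        # adj: (L, N, N) row-normalized intra-plane adjacency
        within = torch.stack([adj[l] @ self.intra[l](h[l])
                              for l in range(h.size(0))])  # per-plane messages
        across = self.pillar(h.mean(dim=0, keepdim=True))  # pillar edges: share across planes
        return torch.relu(within + across)                 # (L, N, d)
```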
Related papers
- GTP-4o: Modality-prompted Heterogeneous Graph Learning for Omni-modal Biomedical Representation [68.63955715643974]
We propose an innovative Modality-prompted Heterogeneous Graph for Omnimodal Learning (GTP-4o)
arXiv Detail & Related papers (2024-07-08T01:06:13Z) - Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate local learnings among diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z) - HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data [10.774128925670183]
This paper presents the Hybrid Early-fusion Attention Learning Network (HEALNet), a flexible multimodal fusion architecture.
We conduct multimodal survival analysis on Whole Slide Images and Multi-omic data on four cancer datasets from The Cancer Genome Atlas (TCGA)
HEALNet achieves state-of-the-art performance compared to other end-to-end trained fusion models.
arXiv Detail & Related papers (2023-11-15T17:06:26Z) - Integration of Graph Neural Network and Neural-ODEs for Tumor Dynamic Prediction [4.850774880198265]
We propose a graph encoder that utilizes a bipartite Graph Convolutional Network (GCN) combined with Neural Ordinary Differential Equations (Neural-ODEs)
We first show that the methodology is able to discover a tumor dynamic model that significantly improves upon an empirical model.
Our findings indicate that the methodology holds significant promise and offers potential applications in pre-clinical settings.
arXiv Detail & Related papers (2023-10-02T06:39:08Z) - Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging and genetic data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z) - Fusing Modalities by Multiplexed Graph Neural Networks for Outcome Prediction in Tuberculosis [3.131872070347212]
We present a novel fusion framework using multiplexed graphs and derive a new graph neural network for learning from such graphs.
We present results that show that our proposed method outperforms state-of-the-art methods of fusing modalities for multi-outcome prediction on a large Tuberculosis (TB) dataset.
arXiv Detail & Related papers (2022-10-25T23:03:05Z) - Multi-modal Dynamic Graph Network: Coupling Structural and Functional Connectome for Disease Diagnosis and Classification [8.67028273829113]
We propose a Multi-modal Dynamic Graph Convolution Network (MDGCN) for structural and functional brain network learning.
Our method benefits from modeling inter-modal representations and relating attentive multi-modal associations into dynamic graphs.
arXiv Detail & Related papers (2022-10-25T02:41:32Z) - Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction [33.03449099154264]
Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores.
arXiv Detail & Related papers (2021-01-20T20:53:07Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called the Prototypical Network, which is a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net)
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.