Multi-modal Graph Learning for Disease Prediction
- URL: http://arxiv.org/abs/2203.05880v1
- Date: Fri, 11 Mar 2022 12:33:20 GMT
- Title: Multi-modal Graph Learning for Disease Prediction
- Authors: Shuai Zheng, Zhenfeng Zhu, Zhizhe Liu, Zhenyu Guo, Yang Liu, Yuchen
Yang, and Yao Zhao
- Abstract summary: We propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction with multi-modal data.
Instead of defining the graph manually, the latent graph structure is captured through an effective way of adaptive graph learning.
An extensive group of experiments on two disease prediction tasks demonstrates that the proposed MMGL achieves more favorable performance.
- Score: 35.156975779372836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benefiting from the powerful expressive capability of graphs, graph-based
approaches have been popularly applied to handle multi-modal medical data and
achieved impressive performance in various biomedical applications. For disease
prediction tasks, most existing graph-based methods tend to define the graph
manually based on a single specified modality (e.g., demographic information), and
then integrate the other modalities to obtain the patient representation via Graph
Representation Learning (GRL). However, constructing an appropriate graph in
advance is not a simple matter for these methods, and the complex correlations
between modalities are ignored. These factors inevitably prevent such methods from
providing sufficient information about the patient's condition for a reliable
diagnosis. To this end, we propose an end-to-end Multi-modal Graph Learning
framework (MMGL) for disease prediction with multi-modal data. To effectively
exploit the rich information across the modalities associated with the disease,
modality-aware representation learning is proposed to aggregate
the features of each modality by leveraging the correlation and complementarity
between the modalities. Furthermore, instead of defining the graph manually,
the latent graph structure is captured through an effective way of adaptive
graph learning. It could be jointly optimized with the prediction model, thus
revealing the intrinsic connections among samples. Our model is also applicable
to inductive learning on unseen data. An extensive group
of experiments on two disease prediction tasks demonstrates that the proposed
MMGL achieves more favorable performance. The code of MMGL is available at
\url{https://github.com/SsGood/MMGL}.
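The adaptive graph learning idea above can be illustrated with a minimal NumPy sketch: build a soft adjacency matrix from a learnable similarity metric over fused patient features, then propagate features over that graph. The weighted-cosine metric, the row-wise softmax, and the single GCN layer here are illustrative assumptions, not the paper's exact formulation; in MMGL the graph and the predictor are optimized jointly, whereas this sketch shows only the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_adjacency(x, w, temperature=1.0):
    """Soft adjacency from a learnable weighted cosine similarity.

    x: (n_patients, d) fused multi-modal patient representations
    w: (d,) per-dimension metric weights (learned by gradient descent
       in practice; fixed here for the sketch)
    """
    xw = x * w                                        # weighted features
    xw = xw / (np.linalg.norm(xw, axis=1, keepdims=True) + 1e-8)
    sim = xw @ xw.T                                   # weighted cosine similarity
    e = np.exp(sim / temperature)                     # row-wise softmax ->
    return e / e.sum(axis=1, keepdims=True)           # normalized edge weights

def gcn_layer(a, x, w_gcn):
    """One graph-convolution step: ReLU(A X W) over the learned graph."""
    return np.maximum(a @ x @ w_gcn, 0.0)

n, d, h = 6, 4, 3
x = rng.normal(size=(n, d))        # fused multi-modal features (hypothetical)
w_metric = np.ones(d)              # metric weights, learned in practice
w_gcn = rng.normal(size=(d, h))    # GCN weights, learned in practice

a = adaptive_adjacency(x, w_metric)
z = gcn_layer(a, x, w_gcn)         # patient embeddings fed to the predictor
```

Because the adjacency is a differentiable function of the features and metric weights, gradients from the prediction loss can flow back into the graph itself, which is what allows the latent structure to be optimized jointly with the prediction model.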
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z) - GTP-4o: Modality-prompted Heterogeneous Graph Learning for Omni-modal Biomedical Representation [68.63955715643974]
We propose an innovative Modality-prompted Heterogeneous Graph for Omnimodal Learning (GTP-4o).
arXiv Detail & Related papers (2024-07-08T01:06:13Z) - MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction [8.592259720470697]
We propose MM-GTUNets, an end-to-end graph-transformer-based multi-modal graph deep learning framework for brain disorder prediction.
We introduce Modality Reward Representation Learning (MRRL) which adaptively constructs population graphs using a reward system.
We also propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features.
arXiv Detail & Related papers (2024-06-20T16:14:43Z) - Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery [29.790200009136825]
We propose a novel method to offset the negative impact between modalities in the process of multi-modal integration and extract heterogeneous information from graphs.
Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms the state-of-the-art methods.
In addition, discriminative brain regions associated with autism are identified by our model, providing guidance for the study of autism pathology.
arXiv Detail & Related papers (2023-03-03T07:09:17Z) - Graph Contrastive Learning for Multi-omics Data [0.0]
We present a learning framework named Multi-Omics Graph Contrastive Learner (MOGCL).
We show that pre-training graph models with a contrastive methodology and fine-tuning them in a supervised manner is an efficient strategy for multi-omics data classification.
arXiv Detail & Related papers (2023-01-03T10:03:08Z) - Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Multi-modal Graph Learning for Disease Prediction [35.4310911850558]
We propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction.
Instead of defining the adjacency matrix manually as existing methods do, the latent graph structure can be captured through a novel way of adaptive graph learning.
arXiv Detail & Related papers (2021-07-01T03:59:22Z) - GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference [41.348451615460796]
We propose a novel semi-supervised approach named GKD based on knowledge distillation.
We perform experiments on two public datasets for diagnosing Autism spectrum disorder and Alzheimer's disease.
According to these experiments, GKD outperforms the previous graph-based deep learning methods in terms of accuracy, AUC, and Macro F1.
arXiv Detail & Related papers (2021-04-08T08:23:37Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a representative graph with a clear cluster structure, making it well suited for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.