Multi-modal Graph Learning over UMLS Knowledge Graphs
- URL: http://arxiv.org/abs/2307.04461v2
- Date: Thu, 9 Nov 2023 15:30:12 GMT
- Title: Multi-modal Graph Learning over UMLS Knowledge Graphs
- Authors: Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
- Abstract summary: We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts.
These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clinicians are increasingly looking towards machine learning to gain insights
about patient evolutions. We propose a novel approach named Multi-Modal UMLS
Graph Learning (MMUGL) for learning meaningful representations of medical
concepts using graph neural networks over knowledge graphs based on the Unified
Medical Language System (UMLS). These representations are aggregated to represent
entire patient visits and then fed into a sequence model to perform predictions
at the granularity of multiple hospital visits of a patient. We improve
performance by incorporating prior medical knowledge and considering multiple
modalities. We compare our method to existing architectures proposed to learn
representations at different granularities on the MIMIC-III dataset and show
that our approach outperforms these methods. The results demonstrate the
significance of multi-modal medical concept representations based on prior
medical knowledge.
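To make the described pipeline concrete, below is a minimal sketch in PyTorch. It assumes a precomputed, row-normalized adjacency matrix over UMLS concepts and float multi-hot concept indicators per visit; the two-layer graph network, mean pooling, GRU, and all dimensions are illustrative assumptions, not the authors' exact MMUGL implementation.

```python
import torch
import torch.nn as nn

class ConceptGNN(nn.Module):
    """Two-layer graph network over UMLS concept embeddings.
    `adj` is a row-normalized adjacency matrix of the concept
    knowledge graph; a full implementation would use a GNN library."""
    def __init__(self, num_concepts: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_concepts, dim)
        self.lin1 = nn.Linear(dim, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        h = self.embed.weight               # (num_concepts, dim)
        h = torch.relu(self.lin1(adj @ h))  # aggregate neighbor information
        return self.lin2(adj @ h)           # refined concept representations

class VisitSequenceModel(nn.Module):
    """Pools concept representations into visit representations,
    then runs a sequence model across a patient's visits."""
    def __init__(self, num_concepts: int, dim: int, num_classes: int):
        super().__init__()
        self.gnn = ConceptGNN(num_concepts, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, adj, visit_concepts):
        # visit_concepts: (batch, visits, num_concepts) float multi-hot
        concept_repr = self.gnn(adj)                # (num_concepts, dim)
        visit_repr = visit_concepts @ concept_repr  # sum-pool concepts per visit
        visit_repr = visit_repr / visit_concepts.sum(-1, keepdim=True).clamp(min=1)
        out, _ = self.gru(visit_repr)               # (batch, visits, dim)
        return self.head(out[:, -1])                # prediction after last visit
```

The structure mirrors the abstract: concept representations come from message passing over the knowledge graph, are aggregated per visit, and a sequence model makes predictions across multiple hospital visits.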
Related papers
- MPLite: Multi-Aspect Pretraining for Mining Clinical Health Records
We present a novel framework, MPLite, that utilizes Multi-aspect Pretraining with Lab results through a lightweight neural network to enhance medical concept representation.
We design a pretraining module that predicts medical codes based on lab results, ensuring robust prediction by fusing multiple aspects of features.
arXiv Detail & Related papers (2024-11-17T19:43:10Z)
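A minimal sketch of the pretraining signal described above, assuming PyTorch: a lightweight network maps a lab-result feature vector to logits over a medical-code vocabulary and is trained with a multi-label loss. All names and dimensions here are hypothetical, not MPLite's actual configuration.

```python
import torch
import torch.nn as nn

class LabToCodePretrainer(nn.Module):
    """Lightweight network that predicts medical codes from lab
    results, used as a pretraining signal for concept representations."""
    def __init__(self, num_lab_features: int, num_codes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_lab_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_codes),  # logits over the code vocabulary
        )

    def forward(self, labs: torch.Tensor) -> torch.Tensor:
        return self.net(labs)

# Multi-label objective: a single visit can carry several codes at once.
model = LabToCodePretrainer(num_lab_features=50, num_codes=2000)
labs = torch.randn(32, 50)                       # toy batch of lab features
codes = torch.randint(0, 2, (32, 2000)).float()  # multi-hot code targets
loss = nn.BCEWithLogitsLoss()(model(labs), codes)
loss.backward()
```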
- Automated Ensemble Multimodal Machine Learning for Healthcare
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
arXiv Detail & Related papers (2024-07-25T17:46:38Z)
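The abstract names three fusion strategies without detailing them; the sketch below shows late fusion, one generic possibility, purely as an illustration (PyTorch; the unimodal models and names are hypothetical, not AutoPrognosis-M's actual strategies).

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """One possible fusion strategy: average the class probabilities
    of independently trained tabular and imaging models."""
    def __init__(self, tabular_model: nn.Module, imaging_model: nn.Module):
        super().__init__()
        self.tabular_model = tabular_model
        self.imaging_model = imaging_model

    def forward(self, tabular: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        p_tab = torch.softmax(self.tabular_model(tabular), dim=-1)
        p_img = torch.softmax(self.imaging_model(image), dim=-1)
        return (p_tab + p_img) / 2  # fused class probabilities
```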
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
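The global image-text contrastive component can be illustrated with a standard symmetric InfoNCE loss; MLIP's divergence encoder and knowledge-guided terms are omitted here. A sketch assuming PyTorch and batch-aligned image/report embedding pairs:

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb: torch.Tensor,
                                txt_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched image-report pairs are pulled
    together, all mismatched pairs in the batch are pushed apart."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```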
- MaxCorrMGNN: A Multi-Graph Neural Network Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction
We develop an innovative fusion approach called MaxCorr MGNN that models non-linear modality correlations within and across patients.
We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs.
We evaluate our model on an outcome prediction task on a tuberculosis dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
arXiv Detail & Related papers (2023-07-13T23:52:41Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- MD-Manifold: A Medical-Distance-Based Representation Learning Approach for Medical Concept and Patient Representation
Representing medical concepts for healthcare analytical tasks requires incorporating medical domain knowledge and prior data information.
Our proposed framework, MD-Manifold, introduces a novel approach to medical concept and patient representation.
It includes a new data augmentation approach, concept distance metric, and patient-patient network to incorporate crucial medical domain knowledge and prior data information.
arXiv Detail & Related papers (2023-04-30T18:58:32Z)
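One ingredient named above, the patient-patient network, can be sketched as a k-nearest-neighbor graph over pairwise patient distances. The Euclidean distance below is a stand-in for the paper's medical-concept-based distance metric, and all names are hypothetical (NumPy):

```python
import numpy as np

def patient_knn_graph(patient_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Connect each patient to its k nearest neighbors under a
    pairwise distance; returns a symmetric adjacency matrix."""
    diff = patient_vecs[:, None, :] - patient_vecs[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)       # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)             # exclude self-edges
    adj = np.zeros_like(dist)
    nearest = np.argsort(dist, axis=1)[:, :k]  # k nearest per patient
    rows = np.repeat(np.arange(len(dist)), k)
    adj[rows, nearest.ravel()] = 1.0
    return np.maximum(adj, adj.T)              # symmetrize the graph
```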
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Deep Co-Attention Network for Multi-View Subspace Learning
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
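A plausible form of the cross reconstruction loss mentioned above, assuming PyTorch: each view is encoded into a common and a complementary code, and each view is reconstructed from the other view's common code plus its own complementary code, which forces the common codes to carry the shared information. The adversarial and label-guidance terms are omitted; this is a sketch of the idea, not the authors' exact objective.

```python
import torch
import torch.nn as nn

class CrossReconstruction(nn.Module):
    """Two-view cross reconstruction: decode each view from the
    other view's common code and its own complementary code."""
    def __init__(self, dim_in: int, dim_code: int):
        super().__init__()
        self.enc_common = nn.ModuleList(
            [nn.Linear(dim_in, dim_code) for _ in range(2)])
        self.enc_specific = nn.ModuleList(
            [nn.Linear(dim_in, dim_code) for _ in range(2)])
        self.dec = nn.ModuleList(
            [nn.Linear(2 * dim_code, dim_in) for _ in range(2)])

    def forward(self, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
        common = [self.enc_common[i](x) for i, x in enumerate((x0, x1))]
        specific = [self.enc_specific[i](x) for i, x in enumerate((x0, x1))]
        # swap the common codes between the views before decoding
        recon0 = self.dec[0](torch.cat([common[1], specific[0]], dim=-1))
        recon1 = self.dec[1](torch.cat([common[0], specific[1]], dim=-1))
        mse = nn.functional.mse_loss
        return mse(recon0, x0) + mse(recon1, x1)
```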
- Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction
Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores.
arXiv Detail & Related papers (2021-01-20T20:53:07Z)
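A minimal sketch of graph convolution over brain regions, assuming PyTorch: the functional connectivity matrix acts as the adjacency, fMRI time series are the node features, and a pooled regression head predicts a cognitive score. The interpretability components and the exact MGCN architecture are not reproduced; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class FCGraphConv(nn.Module):
    """Graph convolution where the functional connectivity (FC)
    matrix between brain regions plays the role of the adjacency."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.lin = nn.Linear(dim_in, dim_out)

    def forward(self, fc: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # fc: (regions, regions) FC matrix; x: (regions, features)
        deg = fc.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.lin((fc / deg) @ x))  # normalized propagation

class CognitiveScoreMGCN(nn.Module):
    """Two graph-convolution layers over fMRI-derived node features,
    mean-pooled into a single regression output (e.g., a WRAT score)."""
    def __init__(self, timeseries_len: int, hidden: int = 64):
        super().__init__()
        self.conv1 = FCGraphConv(timeseries_len, hidden)
        self.conv2 = FCGraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, fc: torch.Tensor, timeseries: torch.Tensor) -> torch.Tensor:
        h = self.conv2(fc, self.conv1(fc, timeseries))
        return self.head(h.mean(dim=0))  # pool over brain regions
```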
- Phenotypical Ontology Driven Framework for Multi-Task Learning
We propose OMTL, an Ontology-driven Multi-Task Learning framework.
It can effectively leverage knowledge from a well-established medical relationship graph (ontology) to construct a novel deep learning network architecture.
We demonstrate its efficacy on several real patient outcome prediction tasks, where it outperforms state-of-the-art multi-task learning schemes.
arXiv Detail & Related papers (2020-09-04T13:46:07Z)
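As a rough illustration of the multi-task setup, the sketch below shows plain hard-parameter-sharing in PyTorch, with a comment marking where ontology-derived task structure would enter; the task names and sizes are hypothetical, and this is not the OMTL architecture itself.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Hard-parameter-sharing multi-task network: a shared encoder
    plus one prediction head per patient-outcome task. An
    ontology-driven variant would additionally group related tasks
    (per the medical ontology) to share intermediate layers."""
    def __init__(self, dim_in: int, hidden: int, task_names: list):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(
            {t: nn.Linear(hidden, 1) for t in task_names})

    def forward(self, x: torch.Tensor) -> dict:
        h = self.encoder(x)
        return {t: head(h) for t, head in self.heads.items()}

model = SharedEncoderMTL(dim_in=40, hidden=64,
                         task_names=["mortality", "readmission"])
outputs = model(torch.randn(8, 40))  # one logit tensor per task
```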
This list is automatically generated from the titles and abstracts of the papers on this site.