Phenotypical Ontology Driven Framework for Multi-Task Learning
- URL: http://arxiv.org/abs/2009.02188v1
- Date: Fri, 4 Sep 2020 13:46:07 GMT
- Title: Phenotypical Ontology Driven Framework for Multi-Task Learning
- Authors: Mohamed Ghalwash, Zijun Yao, Prithwish Chakraborty, James Codella,
Daby Sow
- Abstract summary: We propose OMTL, an Ontology-driven Multi-Task Learning framework.
It can effectively leverage knowledge from a well-established medical relationship graph (ontology) to construct a novel deep learning network architecture.
We demonstrate its efficacy on several real patient outcome prediction tasks, where it outperforms state-of-the-art multi-task learning schemes.
- Score: 5.4507302335583345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the large number of patients in Electronic Health Records (EHRs), the
subset of usable data for modeling outcomes of specific phenotypes is often
imbalanced and of modest size. This can be attributed to the uneven coverage of
medical concepts in EHRs. In this paper, we propose OMTL, an Ontology-driven
Multi-Task Learning framework designed to overcome such data limitations. The
key contribution of our work is the effective use of knowledge from a
predefined, well-established medical relationship graph (ontology) to construct
a novel deep learning network architecture that mirrors this ontology.
Mirroring the ontology enables common representations to be shared across
related phenotypes, which improves learning performance. OMTL naturally allows
for multi-task learning of different phenotypes on distinct predictive tasks,
with the phenotypes tied together by their semantic distance in the external
medical ontology. Using the publicly available MIMIC-III database, we evaluate
OMTL and demonstrate its efficacy on several real patient outcome prediction
tasks, where it outperforms state-of-the-art multi-task learning schemes.
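The listing carries no code, so here is a minimal sketch of the architectural idea in the abstract: a multi-task network whose shared blocks mirror a small toy ontology, so related phenotypes reuse their ancestors' parameters. The toy ontology, module names, and the compose-along-the-root-to-leaf-path scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Toy child -> parent ontology (assumed for illustration; OMTL uses a real
# medical relationship graph).
ONTOLOGY = {
    "heart_failure": "cardiac_disorder",
    "arrhythmia": "cardiac_disorder",
    "cardiac_disorder": "disease",
    "copd": "respiratory_disorder",
    "respiratory_disorder": "disease",
}

def path_to_root(node):
    """Return [root, ..., node] by following the child -> parent map."""
    path = [node]
    while path[-1] in ONTOLOGY:
        path.append(ONTOLOGY[path[-1]])
    return list(reversed(path))

class OntologyMirroredMTL(nn.Module):
    """One small block per ontology node; a phenotype's representation is the
    composition of the blocks on its root-to-leaf path, so related phenotypes
    share their ancestors' parameters."""
    def __init__(self, input_dim, hidden_dim, tasks):
        super().__init__()
        nodes = set(ONTOLOGY) | set(ONTOLOGY.values())
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.blocks = nn.ModuleDict(
            {n: nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
             for n in nodes})
        self.heads = nn.ModuleDict({t: nn.Linear(hidden_dim, 1) for t in tasks})

    def forward(self, x, task):
        h = torch.relu(self.encoder(x))
        for node in path_to_root(task):
            h = self.blocks[node](h)      # shared ancestors, then the leaf block
        return self.heads[task](h)        # logit for this phenotype's outcome

model = OntologyMirroredMTL(input_dim=32, hidden_dim=64,
                            tasks=["heart_failure", "arrhythmia", "copd"])
logits = model(torch.randn(8, 32), task="heart_failure")
```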
Related papers
- Representation-Enhanced Neural Knowledge Integration with Application to Large-Scale Medical Ontology Learning [3.010503480024405]
We propose a theoretically guaranteed statistical framework, called RENKI, to enable simultaneous learning of relation types.
The proposed framework incorporates representation learning output into initial entity embedding of a neural network that approximates the score function for the knowledge graph.
We demonstrate the effect of weighting in the presence of heterogeneous relations and the benefit of incorporating representation learning in nonparametric models.
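As a hedged illustration of the sentence above, the sketch below warm-starts entity embeddings from an external representation-learning output and learns a small neural score function over (head, relation, tail) triples; the class name, the MLP scorer, and all dimensions are assumptions rather than the RENKI specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarmStartKGScorer(nn.Module):
    """Entity embeddings initialized from representation-learning output and
    fine-tuned while a neural score function for triples is learned."""
    def __init__(self, pretrained_entity_vecs, num_relations):
        super().__init__()
        dim = pretrained_entity_vecs.shape[1]
        self.entities = nn.Embedding.from_pretrained(pretrained_entity_vecs,
                                                     freeze=False)
        self.relations = nn.Embedding(num_relations, dim)
        self.scorer = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, 1))

    def forward(self, head, rel, tail):
        h, r, t = self.entities(head), self.relations(rel), self.entities(tail)
        return self.scorer(torch.cat([h, r, t], dim=-1)).squeeze(-1)

# Toy usage: 100 entities with 16-d pretrained vectors, 5 relation types.
pretrained = torch.randn(100, 16)
scorer = WarmStartKGScorer(pretrained, num_relations=5)
scores = scorer(torch.tensor([0, 1]), torch.tensor([2, 2]), torch.tensor([3, 4]))
loss = F.binary_cross_entropy_with_logits(scores, torch.tensor([1.0, 0.0]))
```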
arXiv Detail & Related papers (2024-10-09T21:38:48Z)
- LoRKD: Low-Rank Knowledge Decomposition for Medical Foundation Models [59.961172635689664]
"Knowledge Decomposition" aims to improve the performance on specific medical tasks.
We propose a novel framework named Low-Rank Knowledge Decomposition (LoRKD)
LoRKD explicitly separates gradients from different tasks by incorporating low-rank expert modules and efficient knowledge separation convolution.
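A minimal sketch of the low-rank expert idea, using a linear layer rather than the paper's knowledge separation convolution; the class name, rank, and routing by task id are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LowRankExpertLinear(nn.Module):
    """A shared weight plus one low-rank expert (B @ A) per task, so each
    task's gradients flow mainly through its own small factor pair."""
    def __init__(self, in_dim, out_dim, num_tasks, rank=4):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)
        self.A = nn.Parameter(torch.randn(num_tasks, rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_tasks, out_dim, rank))

    def forward(self, x, task_id):
        delta = self.B[task_id] @ self.A[task_id]   # (out_dim, in_dim)
        return self.shared(x) + x @ delta.t()       # shared path + task expert

layer = LowRankExpertLinear(in_dim=64, out_dim=32, num_tasks=3, rank=4)
y = layer(torch.randn(8, 64), task_id=1)            # expert 1 handles this batch
```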
arXiv Detail & Related papers (2024-09-29T03:56:21Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
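As a rough, hedged sketch of the embedding-ODE idea, the code below evolves directed-graph node embeddings with a hand-rolled Euler integrator; the single adjacency matrix stands in for the paper's coupled structural and effective networks, and all names are assumptions.

```python
import torch
import torch.nn as nn

class DirectedGraphODE(nn.Module):
    """Node embeddings evolve under dh/dt = f(A_dir @ h), integrated with a
    simple Euler scheme over a directed adjacency matrix."""
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.h0 = nn.Parameter(torch.randn(num_nodes, dim) * 0.1)
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, adj_directed, steps=10, dt=0.1):
        h = self.h0
        for _ in range(steps):
            msg = adj_directed @ h        # directed message passing
            h = h + dt * self.f(msg)      # Euler step of the embedding ODE
        return h                          # (num_nodes, dim) at the final time

adj = torch.rand(16, 16)                  # toy directed effective connectivity
embeddings = DirectedGraphODE(num_nodes=16, dim=8)(adj)
```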
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Multi-modal Graph Learning over UMLS Knowledge Graphs [1.6311327256285293]
We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts.
These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient.
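A minimal sketch of the aggregation pipeline described above: concept embeddings are mean-pooled per visit and a GRU runs over the visit sequence. The plain embedding table stands in for MMUGL's multi-modal UMLS graph encoder, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class VisitSequenceModel(nn.Module):
    """Concept embeddings -> mean-pool per visit -> GRU over visits -> logits."""
    def __init__(self, num_concepts, dim, num_labels):
        super().__init__()
        self.concept_emb = nn.Embedding(num_concepts, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_labels)

    def forward(self, visits):
        # visits: LongTensor (batch, num_visits, codes_per_visit) of concept ids
        pooled = self.concept_emb(visits).mean(dim=2)   # (batch, visits, dim)
        _, h_last = self.gru(pooled)                    # final hidden state
        return self.head(h_last.squeeze(0))             # per-patient logits

model = VisitSequenceModel(num_concepts=500, dim=32, num_labels=10)
logits = model(torch.randint(0, 500, (4, 6, 8)))        # 4 patients, 6 visits
```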
arXiv Detail & Related papers (2023-07-10T10:16:57Z)
- Hierarchical Pretraining for Biomedical Term Embeddings [4.69793648771741]
We propose HiPrBERT, a novel biomedical term representation model trained on hierarchical data.
We show that HiPrBERT effectively learns the pair-wise distance from hierarchical information, resulting in substantially more informative embeddings for further biomedical applications.
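One generic way to learn pairwise distances from a hierarchy is sketched below: the embedding distance of two terms is regressed toward their (scaled) tree distance. The exact HiPrBERT objective may differ; the random vectors simply stand in for encoder outputs.

```python
import torch
import torch.nn.functional as F

def hierarchy_distance_loss(emb_a, emb_b, tree_dist, scale=1.0):
    """Push embedding distances toward scaled distances in the ontology tree."""
    d_emb = F.pairwise_distance(emb_a, emb_b)          # (batch,)
    return F.mse_loss(d_emb, scale * tree_dist)

# Toy usage: random "term" embeddings standing in for BERT outputs.
emb_a, emb_b = torch.randn(32, 128), torch.randn(32, 128)
tree_dist = torch.randint(1, 6, (32,)).float()         # hops in the hierarchy
loss = hierarchy_distance_loss(emb_a, emb_b, tree_dist)
```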
arXiv Detail & Related papers (2023-07-01T08:16:00Z)
- Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
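The sketch below is one plausible reading of the two sentences above, not the TAML algorithm itself: a shared encoder with one head per future time point, and label sharing in which an event observed by an earlier horizon also counts as a positive for every later horizon, augmenting the positives of sparse tasks.

```python
import torch
import torch.nn as nn

class MultiHorizonPredictor(nn.Module):
    """Shared encoder with one prediction head per future time point."""
    def __init__(self, in_dim, hidden, horizons):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in horizons)

    def forward(self, x):
        h = self.encoder(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (B, H)

def share_labels_forward(event_by_horizon):
    # cummax propagates a positive label to all later horizons.
    return torch.cummax(event_by_horizon, dim=1).values

labels = share_labels_forward(torch.tensor([[0., 1., 0.], [0., 0., 0.]]))
logits = MultiHorizonPredictor(20, 32, horizons=[30, 60, 90])(torch.randn(2, 20))
```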
arXiv Detail & Related papers (2023-03-05T03:54:54Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
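A generic post-hoc check in that spirit (not the paper's exact protocol) is a permutation importance computed on a fitted treatment-effect model: permute one feature at a time and measure how much the predicted effects move. The `ToyCATE` model and the function name are assumptions for illustration.

```python
import numpy as np

def permutation_importance_cate(cate_model, X, n_repeats=10, seed=0):
    """Mean absolute shift in predicted effects when each feature is permuted."""
    rng = np.random.default_rng(seed)
    base = cate_model.predict(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        shifts = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])     # break feature j only
            shifts.append(np.mean(np.abs(cate_model.predict(Xp) - base)))
        importances[j] = np.mean(shifts)
    return importances

class ToyCATE:                                       # stand-in effect model
    def predict(self, X):
        return 2.0 * X[:, 0] - 0.5 * X[:, 1]

scores = permutation_importance_cate(ToyCATE(), np.random.randn(200, 5))
```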
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Structured Multi-task Learning for Molecular Property Prediction [30.77287550003828]
We study multi-task learning for molecular property prediction in a novel setting, where a relation graph between tasks is available.
In the latent space, we model the task representations by applying a state graph neural network (SGNN) on the relation graph.
In the output space, we employ structured prediction with an energy-based model (EBM), which can be efficiently trained through a noise-contrastive estimation (NCE) approach.
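A hedged sketch of the latent-space part only: task embeddings are smoothed over the task relation graph and then condition a shared predictor. The output-space EBM/NCE component is omitted, and the single message-passing round and all names are assumptions.

```python
import torch
import torch.nn as nn

class TaskRelationMTL(nn.Module):
    """Task embeddings refined over a task relation graph, concatenated with
    the molecule representation for per-task prediction."""
    def __init__(self, num_tasks, task_adj, mol_dim, task_dim):
        super().__init__()
        self.task_emb = nn.Parameter(torch.randn(num_tasks, task_dim) * 0.1)
        self.register_buffer("adj", task_adj / task_adj.sum(1, keepdim=True))
        self.msg = nn.Linear(task_dim, task_dim)
        self.head = nn.Sequential(nn.Linear(mol_dim + task_dim, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, mol_repr, task_id):
        tasks = torch.relu(self.msg(self.adj @ self.task_emb))  # one GNN round
        t = tasks[task_id].expand(mol_repr.size(0), -1)
        return self.head(torch.cat([mol_repr, t], dim=1))

adj = torch.ones(4, 4)                       # toy fully connected task graph
model = TaskRelationMTL(4, adj, mol_dim=128, task_dim=16)
pred = model(torch.randn(8, 128), task_id=2)
```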
arXiv Detail & Related papers (2022-02-22T20:31:23Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO), for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
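The cross-reconstruction term can be sketched as follows, with each view decoded from the other view's latent code; the adversarial and co-attention parts of the paper are not reproduced, and the encoder/decoder shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossReconstruction(nn.Module):
    """Each view is reconstructed from the other view's latent code, pushing
    the shared information into both latents."""
    def __init__(self, dim_a, dim_b, latent):
        super().__init__()
        self.enc_a, self.enc_b = nn.Linear(dim_a, latent), nn.Linear(dim_b, latent)
        self.dec_a, self.dec_b = nn.Linear(latent, dim_a), nn.Linear(latent, dim_b)

    def forward(self, xa, xb):
        za, zb = torch.relu(self.enc_a(xa)), torch.relu(self.enc_b(xb))
        loss = F.mse_loss(self.dec_a(zb), xa) + F.mse_loss(self.dec_b(za), xb)
        return za, zb, loss

model = CrossReconstruction(dim_a=50, dim_b=30, latent=16)
za, zb, recon_loss = model(torch.randn(8, 50), torch.randn(8, 30))
```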
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction [33.03449099154264]
Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual wide range achievement test (WRAT) score.
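A minimal, illustrative sketch of the propagation step: regional fMRI features are mixed over a per-subject functional-connectivity matrix used as the adjacency, pooled, and regressed to a cognitive score. This is not the MGCN architecture; the row-softmax normalization and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConnectivityGCN(nn.Module):
    """One graph convolution over the FC matrix, then region pooling and a
    regression head for a cognitive score."""
    def __init__(self, in_feats, hidden):
        super().__init__()
        self.gc = nn.Linear(in_feats, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, fc):
        # x:  (batch, regions, in_feats) regional time-series features
        # fc: (batch, regions, regions) functional connectivity per subject
        a = torch.softmax(fc, dim=-1)                # simple row normalization
        h = torch.relu(self.gc(a @ x))               # propagate over FC graph
        return self.out(h.mean(dim=1)).squeeze(-1)   # per-subject score

model = ConnectivityGCN(in_feats=120, hidden=32)
score = model(torch.randn(4, 90, 120), torch.randn(4, 90, 90))
```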
arXiv Detail & Related papers (2021-01-20T20:53:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.