PheME: A deep ensemble framework for improving phenotype prediction from multi-modal data
- URL: http://arxiv.org/abs/2303.10794v2
- Date: Wed, 26 Apr 2023 20:40:43 GMT
- Title: PheME: A deep ensemble framework for improving phenotype prediction from multi-modal data
- Authors: Shenghan Zhang, Haoxuan Li, Ruixiang Tang, Sirui Ding, Laila Rasmy,
Degui Zhi, Na Zou, Xia Hu
- Abstract summary: We present PheME, an Ensemble framework using Multi-modality data of structured EHRs and unstructured clinical notes for accurate Phenotype prediction.
We leverage ensemble learning to combine outputs from single-modal models and multi-modal models to improve phenotype predictions.
- Score: 42.56953523499849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detailed phenotype information is fundamental to accurate diagnosis and risk
estimation of diseases. As a rich source of phenotype information, electronic
health records (EHRs) promise to empower diagnostic variant interpretation.
However, how to accurately and efficiently extract phenotypes from the
heterogeneous EHR data remains a challenge. In this work, we present PheME, an
Ensemble framework using Multi-modality data of structured EHRs and
unstructured clinical notes for accurate Phenotype prediction. Firstly, we
employ multiple deep neural networks to learn reliable representations from the
sparse structured EHR data and redundant clinical notes. A multi-modal model
then aligns multi-modal features onto the same latent space to predict
phenotypes. Secondly, we leverage ensemble learning to combine outputs from
single-modal models and multi-modal models to improve phenotype predictions. We
choose seven diseases to evaluate the phenotyping performance of the proposed
framework. Experimental results show that using multi-modal data significantly
improves phenotype prediction for all seven diseases, and that the proposed
ensemble learning framework can further boost performance.
Related papers
- Multimodal Clinical Trial Outcome Prediction with Large Language Models [30.201189349890267]
We propose a multimodal mixture-of-experts (LIFTED) approach for clinical trial outcome prediction.
LIFTED unifies different modality data by transforming them into natural language descriptions.
Then, LIFTED constructs unified noise-resilient encoders to extract information from modal-specific language descriptions.
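Two of the ideas above are easy to illustrate: serializing a modality into a natural-language description, and gating several experts over the encoded result. The field names, sizes, and single gating layer below are assumptions for illustration, not the LIFTED implementation.

```python
# (1) Serialize one modality's raw fields into a natural-language description.
# (2) Route an encoded description through a small mixture-of-experts.
import torch
import torch.nn as nn

def to_description(modality: str, record: dict) -> str:
    """Turn one modality's raw fields into a natural-language description."""
    fields = "; ".join(f"{k} is {v}" for k, v in record.items())
    return f"The {modality} of this trial: {fields}."

print(to_description("drug", {"name": "aspirin", "dose": "75 mg"}))

class MixtureOfExperts(nn.Module):
    def __init__(self, d=64, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
            for _ in range(n_experts))
        self.gate = nn.Linear(d, n_experts)
    def forward(self, h):                  # h: (batch, d) encoded description
        w = torch.softmax(self.gate(h), dim=-1)                   # gate weights
        outs = torch.stack([e(h) for e in self.experts], dim=1)   # (B, E, d)
        return (w.unsqueeze(-1) * outs).sum(dim=1)                # gated mix

h = torch.randn(8, 64)     # stand-in for a text encoder's output
print(MixtureOfExperts()(h).shape)  # (8, 64)
```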
arXiv Detail & Related papers (2024-02-09T16:18:38Z)
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
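The summary is high-level, so the sketch below substitutes a deliberately simple stand-in: treat a learned confounder vector as a direction to project out of the fused features before classification. This is not the paper's causal information minimization objective, only a minimal illustration of removing a confounder representation.

```python
# Schematic stand-in: remove a confounder direction from fused multimodal
# features by orthogonal projection. The confounder vector is assumed given.
import torch

def project_out(z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Remove from each feature row z its component along confounder c."""
    c = c / c.norm()                        # unit confounder direction
    coeff = z @ c                           # (batch,) projections onto c
    return z - coeff.unsqueeze(-1) * c      # residual orthogonal to c

z = torch.randn(16, 64)                     # fused multimodal features
c = torch.randn(64)                         # learned confounder representation
z_debiased = project_out(z, c)
print((z_debiased @ (c / c.norm())).abs().max())  # ~0: direction removed
```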
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
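The key property claimed above, one model yielding a distribution of masks, follows from starting each reverse-diffusion pass at fresh noise. A schematic DDPM-style sampler (untrained toy network, assumed noise schedule) makes this concrete; it is shapes-only, not the paper's model.

```python
# Repeated reverse-diffusion sampling yields a *distribution* of plausible
# masks for one image. Tiny untrained network; schedule is an assumption.
import torch
import torch.nn as nn

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

eps_net = nn.Conv2d(2, 1, kernel_size=3, padding=1)  # in: [noisy mask, image]

@torch.no_grad()
def sample_mask(image: torch.Tensor) -> torch.Tensor:
    """One reverse pass: noise -> segmentation mask, conditioned on image."""
    x = torch.randn_like(image)                       # start from pure noise
    for t in reversed(range(T)):
        eps = eps_net(torch.cat([x, image], dim=1))   # predicted noise
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return (x > 0).float()                            # threshold to binary mask

img = torch.randn(1, 1, 32, 32)
masks = torch.stack([sample_mask(img) for _ in range(5)])  # 5 plausible masks
print(masks.shape, masks.var(dim=0).mean())  # per-pixel variance = ambiguity
```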
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Multi-Modal Perceiver Language Model for Outcome Prediction in Emergency Department [0.03088120935391119]
We are interested in outcome prediction and patient triage in a hospital emergency department, based on free-text chief complaints and vital signs recorded at triage.
We adapt Perceiver, a modality-agnostic transformer-based model that has shown promising results in several applications.
In the experimental analysis, we show that multi-modality improves prediction performance compared with models trained solely on text or vital signs.
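The modality-agnostic trick is to embed everything (text tokens and vital signs alike) as one token sequence that a small learned latent array cross-attends to. A single-block sketch under assumed sizes follows; the real Perceiver stacks such blocks.

```python
# One cross-attention block: learned latents attend to a mixed token sequence
# built from chief-complaint text tokens and vital-sign values.
import torch
import torch.nn as nn

class TinyPerceiver(nn.Module):
    def __init__(self, d=64, n_latents=8, vocab=10000, n_vitals=6, n_classes=2):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, d))
        self.tok_emb = nn.Embedding(vocab, d)        # chief-complaint tokens
        self.vital_proj = nn.Linear(1, d)            # each vital -> one token
        self.xattn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, n_classes)
    def forward(self, tokens, vitals):               # (B, L), (B, n_vitals)
        txt = self.tok_emb(tokens)                   # (B, L, d)
        vit = self.vital_proj(vitals.unsqueeze(-1))  # (B, n_vitals, d)
        inputs = torch.cat([txt, vit], dim=1)        # modality-agnostic sequence
        q = self.latents.expand(tokens.size(0), -1, -1)
        z, _ = self.xattn(q, inputs, inputs)         # latents attend to tokens
        return self.head(z.mean(dim=1))              # pooled latents -> logits

model = TinyPerceiver()
logits = model(torch.randint(0, 10000, (4, 32)), torch.rand(4, 6))
print(logits.shape)  # (4, 2) outcome logits
```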
arXiv Detail & Related papers (2023-04-03T06:32:00Z)
- Drug Synergistic Combinations Predictions via Large-Scale Pre-Training and Graph Structure Learning [82.93806087715507]
Drug combination therapy is a well-established strategy for disease treatment with better effectiveness and less safety degradation.
Deep learning models have emerged as an efficient way to discover synergistic combinations.
Our framework achieves state-of-the-art results in comparison with other deep learning-based methods.
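As a rough illustration of the setup, not the paper's pre-training or graph structure learning, the sketch below runs two rounds of mean-aggregation message passing over an assumed drug-protein interaction graph and scores drug pairs with an MLP.

```python
# Toy message passing over a drug-protein graph, then pairwise synergy scoring.
# Adjacency, feature sizes, and depth are illustrative assumptions.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, n_nodes=100, d=32):
        super().__init__()
        self.x = nn.Parameter(torch.randn(n_nodes, d))  # node features
        self.w1, self.w2 = nn.Linear(d, d), nn.Linear(d, d)
        self.score = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                   nn.Linear(d, 1))
    def forward(self, adj, pairs):                 # adj: (N, N); pairs: (P, 2)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.w1(adj @ self.x / deg))  # mean-aggregate neighbors
        h = torch.relu(self.w2(adj @ h / deg))
        hi, hj = h[pairs[:, 0]], h[pairs[:, 1]]
        return self.score(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # logits

adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T) > 0).float()                  # symmetric interaction graph
pairs = torch.randint(0, 100, (8, 2))              # candidate drug pairs
print(TinyGNN()(adj, pairs).shape)  # (8,) synergy scores
```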
arXiv Detail & Related papers (2023-01-14T15:07:43Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
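A hand-rolled three-node example (Depression -> Symptom -> SpeechFeature, with invented probabilities) shows the kind of posterior reasoning such a network supports; the paper's actual network over speech, facial, and cognitive-game features is far richer.

```python
# Tiny discrete Bayesian network with posterior inference by enumeration.
# All probabilities below are made up for illustration only.
import numpy as np

p_d = np.array([0.8, 0.2])            # P(Depression): [no, yes]
p_s_d = np.array([[0.9, 0.1],         # P(Symptom | Depression=no)
                  [0.3, 0.7]])        # P(Symptom | Depression=yes)
p_f_s = np.array([[0.8, 0.2],         # P(LowSpeechRate | Symptom=absent)
                  [0.25, 0.75]])      # P(LowSpeechRate | Symptom=present)

def posterior_depression(f: int) -> np.ndarray:
    """P(Depression | LowSpeechRate=f), summing out the symptom variable."""
    joint = np.zeros(2)
    for d in (0, 1):
        for s in (0, 1):
            joint[d] += p_d[d] * p_s_d[d, s] * p_f_s[s, f]
    return joint / joint.sum()

print(posterior_depression(1))  # observing low speech rate raises P(depressed)
```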
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- Unsupervised EHR-based Phenotyping via Matrix and Tensor Decompositions [0.6875312133832078]
We provide a comprehensive review of low-rank approximation-based approaches for computational phenotyping.
Recent developments have adapted low-rank data approximation methods by incorporating different constraints and regularizations that facilitate interpretability further.
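A minimal instance of this family is non-negative matrix factorization of a patients-by-codes count matrix; the non-negativity constraint is exactly the kind of interpretability-friendly restriction the review discusses. The data below is synthetic.

```python
# NMF on a patients x diagnosis-codes count matrix: each component is a
# candidate phenotype (a weighted bundle of codes); patient loadings say how
# strongly each phenotype applies.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(200, 50)).astype(float)  # patients x codes

nmf = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
patient_loadings = nmf.fit_transform(counts)  # (200, 3): phenotype intensity
phenotypes = nmf.components_                  # (3, 50): code weights

top_codes = np.argsort(phenotypes, axis=1)[:, ::-1][:, :5]
print(top_codes)  # the 5 most characteristic codes of each candidate phenotype
```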
arXiv Detail & Related papers (2022-09-01T09:47:27Z)
- MOOMIN: Deep Molecular Omics Network for Anti-Cancer Drug Combination Therapy [2.446672595462589]
We propose a multimodal graph neural network that can predict the synergistic effect of drug combinations for cancer treatment.
Our model captures drug representations at multiple scales, using the context provided by a drug-protein interaction network and metadata.
We demonstrate that the model makes high-quality predictions over a wide range of cancer cell line tissues.
arXiv Detail & Related papers (2021-10-28T13:10:25Z)
- Hybrid deep learning methods for phenotype prediction from clinical notes [4.866431869728018]
This paper proposes a novel hybrid model for automatically extracting patient phenotypes using natural language processing and deep learning models.
The proposed hybrid model combines a neural bidirectional sequence model (BiLSTM or BiGRU) with a Convolutional Neural Network (CNN) to identify patients' phenotypes in discharge reports.
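A compact PyTorch sketch of that hybrid shape (embeddings into a BiLSTM, a 1-D CNN over its hidden states, max-pooling into a classifier) follows; vocabulary and layer sizes are assumptions.

```python
# BiLSTM reads the discharge report; a 1-D CNN scans its hidden states;
# max-pooled features feed a phenotype classifier.
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, vocab=20000, d=64, hidden=64, n_phenotypes=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.lstm = nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 128, kernel_size=3, padding=1)
        self.head = nn.Linear(128, n_phenotypes)
    def forward(self, tokens):                        # (B, L) token ids
        h, _ = self.lstm(self.emb(tokens))            # (B, L, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # (B, 128, L)
        return self.head(c.max(dim=2).values)         # global max pool -> logits

model = BiLSTMCNN()
print(model(torch.randint(0, 20000, (4, 256))).shape)  # (4, 10) logits
```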
arXiv Detail & Related papers (2021-08-16T05:57:28Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
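The multi-modal multi-channel idea can be sketched as one small CNN branch per MR modality, with concatenated pooled features feeding a survival-time regressor; the 2-D branches below are a simplification of the paper's design, for illustration only.

```python
# One CNN branch per MR modality; concatenated pooled features regress OS time.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # (B, 16) per modality

class M2NetSketch(nn.Module):
    def __init__(self, n_modalities=4):
        super().__init__()
        self.branches = nn.ModuleList(branch() for _ in range(n_modalities))
        self.head = nn.Linear(16 * n_modalities, 1)   # OS time regression
    def forward(self, scans):                 # list of (B, 1, H, W) modalities
        feats = [b(x) for b, x in zip(self.branches, scans)]
        return self.head(torch.cat(feats, dim=-1)).squeeze(-1)

scans = [torch.randn(2, 1, 64, 64) for _ in range(4)]  # e.g., T1, T1c, T2, FLAIR
print(M2NetSketch()(scans).shape)  # (2,) predicted OS times
```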
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.