Improving Clinical Outcome Predictions Using Convolution over Medical
Entities with Multimodal Learning
- URL: http://arxiv.org/abs/2011.12349v2
- Date: Thu, 26 Nov 2020 09:40:41 GMT
- Title: Improving Clinical Outcome Predictions Using Convolution over Medical
Entities with Multimodal Learning
- Authors: Batuhan Bardak and Mehmet Tan
- Abstract summary: Early prediction of mortality and length of stay (LOS) of a patient is vital for saving a patient's life and for the management of hospital resources.
In this work, we extract medical entities from clinical notes and use them as additional features, alongside time-series features, to improve our predictions.
We propose a convolution-based multimodal architecture that effectively learns to combine medical entities and the time-series ICU signals of patients.
- Score: 0.522145960878624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early prediction of mortality and length of stay (LOS) of a patient is
vital for saving the patient's life and for the management of hospital resources.
The availability of electronic health records (EHR) has had a major impact on the
healthcare domain, and several works have addressed clinical prediction problems.
However, many studies did not benefit from clinical notes because of their sparse
and high-dimensional nature. In this work, we extract medical entities from
clinical notes and use them as additional features, alongside time-series
features, to improve our predictions. We propose a convolution-based multimodal
architecture that not only effectively learns to combine medical entities and the
time-series ICU signals of patients, but also allows us to compare the effect of
different embedding techniques, such as Word2vec and FastText, on medical
entities. In our experiments, the proposed method robustly outperforms all other
baseline models, including different multimodal architectures, across all
clinical tasks. The code for the proposed method is available at
https://github.com/tanlab/ConvolutionMedicalNer.
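The fusion idea described above can be sketched as a small two-branch network: a 1-D convolution over the sequence of medical-entity embeddings (e.g. from Word2vec or FastText) and a recurrent encoder over the ICU time series, concatenated into a shared prediction head. This is a minimal illustrative sketch, not the authors' exact model; all layer sizes, input dimensions, and names (`ConvMultimodalNet`, `emb_dim`, `n_signals`) are assumptions for the example.

```python
import torch
import torch.nn as nn

class ConvMultimodalNet(nn.Module):
    """Hypothetical sketch of a convolution-over-entities multimodal model.

    Branch 1 convolves over pretrained entity embeddings; branch 2 encodes
    the ICU time series with a GRU; the fused features feed a binary head
    (e.g. in-hospital mortality). Dimensions are illustrative assumptions.
    """

    def __init__(self, emb_dim=100, n_signals=17, hidden=64, n_filters=32):
        super().__init__()
        # 1-D convolution over the entity-embedding sequence.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        # Recurrent encoder for the multivariate ICU signals.
        self.gru = nn.GRU(n_signals, hidden, batch_first=True)
        # Fusion head for a single binary outcome.
        self.head = nn.Linear(n_filters + hidden, 1)

    def forward(self, entity_emb, timeseries):
        # entity_emb: (batch, n_entities, emb_dim) from Word2vec/FastText
        # timeseries: (batch, timesteps, n_signals)
        c = self.conv(entity_emb.transpose(1, 2))     # (batch, filters, n_entities)
        c = self.pool(torch.relu(c)).squeeze(-1)      # (batch, filters)
        _, h = self.gru(timeseries)                   # h: (1, batch, hidden)
        fused = torch.cat([c, h.squeeze(0)], dim=-1)  # (batch, filters + hidden)
        return torch.sigmoid(self.head(fused))        # outcome probability

model = ConvMultimodalNet()
# A batch of 4 patients: 20 extracted entities, 48 hourly steps of 17 signals.
probs = model(torch.randn(4, 20, 100), torch.randn(4, 48, 17))
print(probs.shape)  # torch.Size([4, 1])
```

Max-pooling over the entity axis makes the entity branch order-insensitive and tolerant of a variable number of extracted entities, which matches the sparse nature of clinical notes the abstract mentions.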
Related papers
- AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between a Doctor (as the player) and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - An Interpretable Deep-Learning Framework for Predicting Hospital
Readmissions From Electronic Health Records [2.156208381257605]
We propose a novel, interpretable deep-learning framework for predicting unplanned hospital readmissions.
We validate our system on the two predictive tasks of hospital readmission within 30 and 180 days, using real-world data.
arXiv Detail & Related papers (2023-10-16T08:48:52Z) - Hierarchical Pretraining for Biomedical Term Embeddings [4.69793648771741]
We propose HiPrBERT, a novel biomedical term representation model trained on hierarchical data.
We show that HiPrBERT effectively learns pair-wise distances from hierarchical information, resulting in substantially more informative embeddings for further biomedical applications.
arXiv Detail & Related papers (2023-07-01T08:16:00Z) - SPeC: A Soft Prompt-Based Calibration on Performance Variability of
Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z) - Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
arXiv Detail & Related papers (2023-03-05T03:54:54Z) - Heterogeneous Graph Learning for Multi-modal Medical Data Analysis [6.3082663934391014]
We propose an effective graph-based framework called HetMed for fusing the multi-modal medical data.
HetMed captures the complex relationship between patients in a systematic way, which leads to more accurate clinical decisions.
arXiv Detail & Related papers (2022-11-28T09:14:36Z) - Modelling Patient Trajectories Using Multimodal Information [0.0]
We propose a solution to model patient trajectories that combines different types of information and considers the temporal aspect of clinical data.
The developed solution was evaluated on two different clinical outcomes, unexpected patient readmission and disease progression.
arXiv Detail & Related papers (2022-09-09T10:20:54Z) - How to Leverage Multimodal EHR Data for Better Medical Predictions? [13.401754962583771]
The complexity of electronic health record (EHR) data is a challenge for the application of deep learning.
In this paper, we first extract the accompanying clinical notes from EHR and propose a method to integrate these data.
The results on two medical prediction tasks show that our fused model with different data outperforms the state-of-the-art method.
arXiv Detail & Related papers (2021-10-29T13:26:05Z) - Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z) - BiteNet: Bidirectional Temporal Encoder Network to Predict Medical
Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.