REALM: RAG-Driven Enhancement of Multimodal Electronic Health Records
Analysis via Large Language Models
- URL: http://arxiv.org/abs/2402.07016v1
- Date: Sat, 10 Feb 2024 18:27:28 GMT
- Title: REALM: RAG-Driven Enhancement of Multimodal Electronic Health Records
Analysis via Large Language Models
- Authors: Yinghao Zhu, Changyu Ren, Shiyun Xie, Shukai Liu, Hangyuan Ji, Zixiang
Wang, Tao Sun, Long He, Zhoujun Li, Xi Zhu, Chengwei Pan
- Abstract summary: Existing models often lack the medical context relevant to clinical tasks, prompting the incorporation of external knowledge.
We propose REALM, a Retrieval-Augmented Generation (RAG) driven framework to enhance multimodal EHR representations.
Our experiments on MIMIC-III mortality and readmission tasks showcase the superior performance of our REALM framework over baselines.
- Score: 19.62552013839689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of multimodal Electronic Health Record (EHR) data has
significantly improved clinical predictive capabilities. Although existing models
leverage clinical notes and multivariate time-series EHR data, they often lack the
medical context relevant to clinical tasks, prompting the incorporation of external
knowledge, particularly from knowledge graphs (KGs). Previous KG-based approaches
have focused primarily on structured knowledge extraction, neglecting unstructured
data modalities and semantically rich, high-dimensional medical knowledge. In
response, we propose REALM, a Retrieval-Augmented Generation (RAG) driven framework
that enhances multimodal EHR representations and addresses these limitations.
First, we apply a Large Language Model (LLM) to encode long-context clinical notes
and a GRU to encode time-series EHR data. Second, we prompt the LLM to extract
task-relevant medical entities and match them against a professionally labeled
external knowledge graph (PrimeKG) to retrieve the corresponding medical knowledge.
By matching and aligning entities with clinical standards, our framework eliminates
hallucinations and ensures consistency. Lastly, we propose an adaptive multimodal
fusion network to integrate the extracted knowledge with multimodal EHR data.
Extensive experiments on MIMIC-III mortality and readmission tasks showcase the
superior performance of our REALM framework over baselines and underscore the
effectiveness of each module. REALM contributes to refining the use of multimodal
EHR data in healthcare and to bridging the gap with the nuanced medical context
essential for informed clinical predictions.
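To make the described pipeline concrete, below is a minimal PyTorch sketch of the three steps in the abstract: a pooled LLM embedding for the clinical note, a GRU over the time-series EHR, and an adaptive fusion of both with an embedding of retrieved PrimeKG knowledge. The module names, embedding dimensions, and the gated cross-attention fusion are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a REALM-style pipeline (illustrative assumptions only).
    import torch
    import torch.nn as nn

    class AdaptiveMultimodalFusion(nn.Module):
        """Fuses note, time-series, and retrieved-knowledge representations."""
        def __init__(self, dim: int = 256, note_dim: int = 1024,
                     ts_features: int = 17, kg_dim: int = 768):
            super().__init__()
            self.note_proj = nn.Linear(note_dim, dim)   # pooled LLM note embedding
            self.ts_encoder = nn.GRU(ts_features, dim, batch_first=True)
            self.kg_proj = nn.Linear(kg_dim, dim)       # retrieved PrimeKG knowledge
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                    batch_first=True)
            self.gate = nn.Sequential(nn.Linear(3 * dim, 3), nn.Softmax(dim=-1))
            self.head = nn.Linear(dim, 1)               # mortality/readmission logit

        def forward(self, note_emb, ts, kg_emb):
            h_note = self.note_proj(note_emb)           # (B, dim)
            _, h_ts = self.ts_encoder(ts)               # (1, B, dim)
            h_ts = h_ts.squeeze(0)
            h_kg = self.kg_proj(kg_emb)                 # (B, dim)
            # Retrieved knowledge attends over the two EHR modalities.
            ehr = torch.stack([h_note, h_ts], dim=1)    # (B, 2, dim)
            ctx, _ = self.cross_attn(h_kg.unsqueeze(1), ehr, ehr)
            ctx = ctx.squeeze(1)
            # Adaptive (gated) weighting of the three representations.
            w = self.gate(torch.cat([h_note, h_ts, ctx], dim=-1))  # (B, 3)
            fused = w[:, 0:1] * h_note + w[:, 1:2] * h_ts + w[:, 2:3] * ctx
            return torch.sigmoid(self.head(fused))

    # Toy usage with random tensors standing in for real MIMIC-III inputs.
    model = AdaptiveMultimodalFusion()
    note_emb = torch.randn(8, 1024)   # pooled LLM embedding of a clinical note
    ts = torch.randn(8, 48, 17)       # 48 time steps x 17 features (illustrative)
    kg_emb = torch.randn(8, 768)      # embedding of retrieved PrimeKG knowledge
    print(model(note_emb, ts, kg_emb).shape)   # torch.Size([8, 1])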
Related papers
- Large Language Model Benchmarks in Medical Tasks [11.196196955468992]
This paper presents a survey of benchmark datasets employed in medical large language model (LLM) tasks.
The survey categorizes the datasets by modality, discussing their significance, data structure, and impact on the development of LLMs.
The paper emphasizes the need for datasets with a greater degree of language diversity, structured omics data, and innovative approaches to synthesis.
arXiv Detail & Related papers (2024-10-28T11:07:33Z)
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- MedTsLLM: Leveraging LLMs for Multimodal Medical Time Series Analysis [6.30440420617113]
We introduce MedTsLLM, a general multimodal large language model (LLM) framework that integrates time series data and rich contextual information in the form of text to analyze physiological signals.
We perform three tasks with clinical relevance: semantic segmentation, boundary detection, and anomaly detection in time series.
Our model outperforms state-of-the-art baselines, including deep learning models, other LLMs, and clinical methods across multiple medical domains.
arXiv Detail & Related papers (2024-08-14T18:57:05Z)
- GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI [67.09501109871351]
Large Vision-Language Models (LVLMs) are capable of handling diverse data types such as imaging, text, and physiological signals.
GMAI-MMBench is, to date, the most comprehensive general medical AI benchmark, with a well-categorized data structure and multi-perceptual granularity.
It is constructed from 284 datasets across 38 medical image modalities, 18 clinical-related tasks, 18 departments, and 4 perceptual granularities in a Visual Question Answering (VQA) format.
arXiv Detail & Related papers (2024-08-06T17:59:21Z)
- medIKAL: Integrating Knowledge Graphs as Assistants of LLMs for Enhanced Clinical Diagnosis on EMRs [13.806201934732321]
medIKAL combines Large Language Models (LLMs) with knowledge graphs (KGs) to enhance diagnostic capabilities.
medIKAL assigns weighted importance to entities in medical records based on their type, enabling precise localization of candidate diseases within KGs.
We validated medIKAL's effectiveness through extensive experiments on a newly introduced open-source Chinese EMR dataset.
arXiv Detail & Related papers (2024-06-20T13:56:52Z)
- EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling [22.94521527609479]
EMERGE is a Retrieval-Augmented Generation driven framework aimed at enhancing multimodal EHR predictive modeling.
Our approach extracts entities from both time-series data and clinical notes by prompting Large Language Models.
The extracted knowledge is then used to generate task-relevant summaries of patients' health statuses.
arXiv Detail & Related papers (2024-05-27T10:53:15Z)
- AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between a Doctor, as the player, and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z)
- Next Visit Diagnosis Prediction via Medical Code-Centric Multimodal Contrastive EHR Modelling with Hierarchical Regularisation [0.0]
We propose NECHO, a novel medical code-centric multimodal contrastive EHR learning framework with hierarchical regularisation.
First, we integrate multifaceted information encompassing medical codes, demographics, and clinical notes using a tailored network design.
We also regularise modality-specific encoders using parental-level information in the medical ontology to learn the hierarchical structure of EHR data.
arXiv Detail & Related papers (2024-01-22T01:58:32Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization approach for hybrid volumetric medical image segmentation models.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Towards Medical Artificial General Intelligence via Knowledge-Enhanced Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knOwledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)