MeSIN: Multilevel Selective and Interactive Network for Medication
Recommendation
- URL: http://arxiv.org/abs/2104.11026v1
- Date: Thu, 22 Apr 2021 12:59:50 GMT
- Title: MeSIN: Multilevel Selective and Interactive Network for Medication
Recommendation
- Authors: Yang An and Liang Zhang and Mao You and Xueqing Tian and Bo Jin and
Xiaopeng Wei
- Abstract summary: We propose a multilevel selective and interactive network (MeSIN) for medication recommendation.
- First, an attentional selective module (ASM) is applied to assign flexible attention scores to different medical code embeddings.
- Second, we incorporate a novel interactive long short-term memory network (InLSTM) to reinforce the interactions of multilevel medical sequences in EHR data.
- Score: 9.173903754083927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommending medications for patients using electronic health records (EHRs)
is a crucial data mining task for an intelligent healthcare system. It can
assist doctors in making clinical decisions more efficiently. However, the
inherent complexity of the EHR data renders it a challenging task: (1)
Multilevel structures: the EHR data typically contains multilevel structures
which are closely related to the decision-making pathways, e.g., laboratory
results lead to disease diagnoses, and then contribute to the prescribed
medications; (2) Multiple sequences interactions: multiple sequences in EHR
data are usually closely correlated with each other; (3) Abundant noise: numerous
task-unrelated features and noisy signals within EHR data generally result in
suboptimal performance. To tackle the above challenges, we propose a
multilevel selective and interactive network (MeSIN) for medication
recommendation. Specifically, MeSIN is designed with three components. First,
an attentional selective module (ASM) is applied to assign flexible attention
scores to different medical code embeddings according to their relevance to the
recommended medications in every admission. Second, we incorporate a novel
interactive long short-term memory network (InLSTM) to reinforce the
interactions of multilevel medical sequences in EHR data with the help of a
calibrated memory-augmented cell and an enhanced input gate. Finally, we employ
a global selective fusion module (GSFM) to fuse the multi-source information
embeddings into the final patient representations for medication recommendation.
To validate our method, extensive experiments have been conducted on a
real-world clinical dataset. The results demonstrate a consistent superiority
of our framework over several baselines and testify to the effectiveness of our
proposed approach.
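To make the two selection components concrete, the minimal PyTorch sketch below implements an ASM-style attention over the medical-code embeddings of a single admission and a GSFM-style gated fusion of multi-source embeddings. All module names, dimensions, scoring functions, and the final medication scorer are illustrative assumptions not specified in the abstract, and the InLSTM's calibrated memory-augmented cell and enhanced input gate are not sketched here.

```python
# Hedged sketch of MeSIN-style selection and fusion, based only on the abstract.
# Names (AttentionalSelect, GatedFusion), dimensions, and scoring functions are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalSelect(nn.Module):
    """ASM-like module: score each medical-code embedding in an admission by its
    relevance to a query vector (e.g., a medication-history summary) and return
    the attention-weighted sum."""
    def __init__(self, code_dim: int, query_dim: int):
        super().__init__()
        self.proj = nn.Linear(code_dim + query_dim, 1)

    def forward(self, codes: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # codes: (num_codes, code_dim); query: (query_dim,)
        q = query.unsqueeze(0).expand(codes.size(0), -1)   # broadcast query to each code
        scores = self.proj(torch.cat([codes, q], dim=-1))  # (num_codes, 1)
        weights = F.softmax(scores, dim=0)                 # flexible attention scores
        return (weights * codes).sum(dim=0)                # (code_dim,)

class GatedFusion(nn.Module):
    """GSFM-like module: fuse multi-source embeddings (e.g., labs, diagnoses,
    medication history) into one patient representation via a learned,
    softmax-normalized gate per source."""
    def __init__(self, dim: int, num_sources: int):
        super().__init__()
        self.gate = nn.Linear(dim * num_sources, num_sources)

    def forward(self, sources: torch.Tensor) -> torch.Tensor:
        # sources: (num_sources, dim)
        g = F.softmax(self.gate(sources.flatten()), dim=-1)  # one weight per source
        return (g.unsqueeze(-1) * sources).sum(dim=0)        # (dim,)

# Toy usage: 7 code embeddings in one admission, 3 information sources,
# 150 candidate medications (all numbers are made up for illustration).
asm = AttentionalSelect(code_dim=64, query_dim=32)
admission_repr = asm(torch.randn(7, 64), torch.randn(32))
fusion = GatedFusion(dim=64, num_sources=3)
patient_repr = fusion(torch.stack([admission_repr, torch.randn(64), torch.randn(64)]))
logits = nn.Linear(64, 150)(patient_repr)  # multi-label medication scores
```

In this reading, the softmax over per-code scores supplies the "flexible attention scores" and the per-source gate plays the role of the global selective fusion; the authors' actual formulation may differ.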
Related papers
- Towards Evaluating and Building Versatile Large Language Models for Medicine [57.49547766838095]
We present MedS-Bench, a benchmark designed to evaluate the performance of large language models (LLMs) in clinical contexts.
MedS-Bench spans 11 high-level clinical tasks, including clinical report summarization, treatment recommendations, diagnosis, named entity recognition, and medical concept explanation.
The accompanying MedS-Ins dataset comprises 58 medically oriented language corpora, totaling 13.5 million samples across 122 tasks.
arXiv Detail & Related papers (2024-08-22T17:01:34Z) - EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling [22.94521527609479]
EMERGE is a Retrieval-Augmented Generation driven framework aimed at enhancing multimodal EHR predictive modeling.
Our approach extracts entities from both time-series data and clinical notes by prompting Large Language Models.
The extracted knowledge is then used to generate task-relevant summaries of patients' health statuses.
arXiv Detail & Related papers (2024-05-27T10:53:15Z) - Multimodal Fusion of EHR in Structures and Semantics: Integrating Clinical Records and Notes with Hypergraph and LLM [39.25272553560425]
We propose a new framework called MINGLE, which integrates both structures and semantics in EHR effectively.
Our framework uses a two-level infusion strategy to combine medical concept semantics and clinical note semantics into hypergraph neural networks.
Experiment results on two EHR datasets, the public MIMIC-III and the private CRADLE, show that MINGLE effectively improves predictive performance by a relative 11.83%.
arXiv Detail & Related papers (2024-02-19T23:48:40Z) - AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between the Doctor as player and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z) - REALM: RAG-Driven Enhancement of Multimodal Electronic Health Records
Analysis via Large Language Models [19.62552013839689]
Existing models often lack the medical context relevant to clinical tasks, prompting the incorporation of external knowledge.
We propose REALM, a Retrieval-Augmented Generation (RAG) driven framework to enhance multimodal EHR representations.
Our experiments on MIMIC-III mortality and readmission tasks showcase the superior performance of our REALM framework over baselines.
arXiv Detail & Related papers (2024-02-10T18:27:28Z) - Next Visit Diagnosis Prediction via Medical Code-Centric Multimodal Contrastive EHR Modelling with Hierarchical Regularisation [0.0]
We propose NECHO, a novel medical code-centric multimodal contrastive EHR learning framework with hierarchical regularisation.
First, we integrate multifaceted information encompassing medical codes, demographics, and clinical notes using a tailored network design.
We also regularise modality-specific encoders using parental-level information in the medical ontology to learn the hierarchical structure of EHR data.
arXiv Detail & Related papers (2024-01-22T01:58:32Z) - XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z) - INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and
Prognosis [19.32686665459374]
We introduce INSPECT, which contains de-identified longitudinal records from a large cohort of patients at risk for pulmonary embolism (PE).
INSPECT contains data from 19,402 patients, including CT images, radiology report impression sections, and structured electronic health record (EHR) data (i.e., demographics, diagnoses, procedures, vitals, and medications).
arXiv Detail & Related papers (2023-11-17T07:28:16Z) - Towards Medical Artificial General Intelligence via Knowledge-Enhanced
Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knowledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z) - SPeC: A Soft Prompt-Based Calibration on Performance Variability of
Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z) - BiteNet: Bidirectional Temporal Encoder Network to Predict Medical
Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
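The BiteNet summary above centers on self-attention over a patient's healthcare journey. The sketch below is only a generic multi-head self-attention over visit embeddings, using PyTorch's nn.MultiheadAttention with mean pooling; it is a rough illustration of the idea, not the paper's actual bidirectional temporal encoder, and all dimensions are assumptions.

```python
# Hedged sketch: self-attention over a sequence of visit embeddings.
# Layer sizes and the pooling choice are illustrative assumptions.
import torch
import torch.nn as nn

visit_dim, num_visits = 64, 10
visits = torch.randn(1, num_visits, visit_dim)      # (batch, visits, dim)

attn = nn.MultiheadAttention(embed_dim=visit_dim, num_heads=4, batch_first=True)
contextual, weights = attn(visits, visits, visits)  # each visit attends to all others
patient_repr = contextual.mean(dim=1)               # pooled journey representation
```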