EHR Interaction Between Patients and AI: NoteAid EHR Interaction
- URL: http://arxiv.org/abs/2312.17475v1
- Date: Fri, 29 Dec 2023 05:13:40 GMT
- Title: EHR Interaction Between Patients and AI: NoteAid EHR Interaction
- Authors: Xiaocheng Zhang, Zonghai Yao, Hong Yu
- Abstract summary: This paper introduces the NoteAid EHR Interaction Pipeline, an innovative approach developed using generative LLMs to assist in patient education.
We extract 10,000 instances from MIMIC Discharge Summaries and 876 instances from the MADE medical notes collection, and execute the two tasks on these data through the NoteAid EHR Interaction Pipeline.
Through a comprehensive evaluation of the entire dataset using LLM assessment and a rigorous manual evaluation of 64 instances, we showcase the potential of LLMs in patient education.
- Score: 7.880641398866267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid advancement of Large Language Models (LLMs) and their
outstanding performance in semantic and contextual comprehension, the potential
of LLMs in specialized domains warrants exploration. This paper introduces the
NoteAid EHR Interaction Pipeline, an innovative approach developed using
generative LLMs to assist in patient education, a task stemming from the need
to aid patients in understanding Electronic Health Records (EHRs). Building
upon the NoteAid work, we designed two novel tasks from the patient's
perspective: providing explanations for EHR content that patients may not
understand and answering questions posed by patients after reading their EHRs.
We extracted 10,000 instances from MIMIC Discharge
Summaries and 876 instances from the MADE medical notes collection, and
executed the two tasks on these data through the NoteAid EHR Interaction
Pipeline. The LLM outputs on these tasks were collected and compiled into the
corresponding NoteAid EHR Interaction Dataset.
Through a comprehensive evaluation of the entire dataset using LLM assessment
and a rigorous manual evaluation of 64 instances, we showcase the potential of
LLMs in patient education. In addition, the results provide valuable data
support for future exploration and applications in this domain, while also
supplying high-quality synthetic datasets for in-house system training.
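The abstract describes the pipeline only at a high level; the sketch below is a minimal illustration of how the two patient-facing tasks could be framed as prompts to a chat-style LLM. The client library, model name, prompt wording, and function names are all assumptions for illustration, not the authors' released code.
```python
# Minimal illustrative sketch of the two NoteAid EHR Interaction tasks.
# ASSUMPTIONS: the OpenAI chat client, the model name, and all prompt
# wording are placeholders; the paper does not specify its implementation here.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def _ask(prompt: str) -> str:
    """Send a single prompt to a generative LLM and return the text reply."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def explain_ehr(ehr_passage: str) -> str:
    """Task 1: explain EHR content that a patient may not understand."""
    return _ask(
        "You are helping a patient understand their medical record. "
        "Explain the following passage in plain, non-technical language:\n\n"
        + ehr_passage
    )


def answer_patient_question(ehr_text: str, question: str) -> str:
    """Task 2: answer a question the patient asks after reading their EHR."""
    return _ask(
        "Using only the discharge summary below, answer the patient's "
        "question in plain language.\n\nDischarge summary:\n"
        + ehr_text
        + "\n\nPatient question: "
        + question
    )
```
Per the abstract, outputs collected in this manner over the MIMIC and MADE instances form the NoteAid EHR Interaction Dataset, which is then scored via LLM assessment and manual review.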
Related papers
- Enhanced Electronic Health Records Text Summarization Using Large Language Models [0.0]
This project builds on prior work by creating a system that generates clinician-preferred, focused summaries.
The proposed system leverages the Flan-T5 model to generate tailored EHR summaries based on clinician-specified topics.
arXiv Detail & Related papers (2024-10-12T19:36:41Z)
- HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding [68.4046326104724]
We introduce HERM-Bench, a benchmark for evaluating the human-centric understanding capabilities of MLLMs.
Our work reveals the limitations of existing MLLMs in understanding complex human-centric scenarios.
We present HERM-100K, a comprehensive dataset with multi-level human-centric annotations, aimed at enhancing MLLMs' training.
arXiv Detail & Related papers (2024-10-09T11:14:07Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- README: Bridging Medical Jargon and Lay Understanding for Patient Education through Data-Centric NLP [9.432205523734707]
We introduce a new task of automatically generating lay definitions, aiming to simplify medical terms into patient-friendly lay language.
We first created the README dataset, an extensive collection of over 50,000 unique (medical term, lay definition) pairs and 300,000 mentions.
We have also engineered a data-centric Human-AI pipeline that synergizes data filtering, augmentation, and selection to improve data quality.
arXiv Detail & Related papers (2023-12-24T23:01:00Z)
- EHRTutor: Enhancing Patient Understanding of Discharge Instructions [11.343429138567572]
This paper presents EHRTutor, an innovative multi-component framework leveraging the Large Language Model (LLM) for patient education through conversational question-answering.
It first formulates questions pertaining to the electronic health record discharge instructions.
It then educates the patient through conversation by administering each question as a test. Finally, it generates a summary at the end of the conversation.
arXiv Detail & Related papers (2023-10-30T00:46:03Z)
- Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges [18.56314471146199]
The large volume of notes often associated with a patient, together with time constraints, renders manually identifying relevant evidence practically infeasible.
We propose and evaluate a zero-shot strategy for using LLMs as a mechanism to efficiently retrieve and summarize unstructured evidence in patient EHRs.
arXiv Detail & Related papers (2023-09-08T18:44:47Z)
- MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records [60.35217378132709]
Large language models (LLMs) can follow natural language instructions with human-level fluency.
However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging.
We introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data.
arXiv Detail & Related papers (2023-08-27T12:24:39Z)
- PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients' Problems and Data Augmentation with Black-box Large Language Models [25.363775123262307]
Automatic summarisation of a patient's problems in the form of a problem list can aid stakeholders in understanding a patient's condition, reducing workload and cognitive bias.
BioNLP 2023 Shared Task 1A focuses on generating a list of diagnoses and problems from the provider's progress notes during hospitalisation.
Our approach has two components: one employs large language models (LLMs) for data augmentation; the other is an abstractive summarisation LLM with a novel pre-training objective for generating the patients' problems summarised as a list.
Our approach was ranked second among all submissions to the shared task.
arXiv Detail & Related papers (2023-06-05T10:17:50Z)
- Large Language Models for Healthcare Data Augmentation: An Example on Patient-Trial Matching [49.78442796596806]
We propose an innovative privacy-aware data augmentation approach for patient-trial matching (LLM-PTM).
Our experiments demonstrate a 7.32% average improvement in performance using the proposed LLM-PTM method, and the generalizability to new data is improved by 12.12%.
arXiv Detail & Related papers (2023-03-24T03:14:00Z)
- Retrieval-Augmented and Knowledge-Grounded Language Models for Faithful Clinical Medicine [68.7814360102644]
We propose the Re³Writer method with retrieval-augmented generation and knowledge-grounded reasoning.
We demonstrate the effectiveness of our method in generating patient discharge instructions.
arXiv Detail & Related papers (2022-10-23T16:34:39Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)