LLM on FHIR -- Demystifying Health Records
- URL: http://arxiv.org/abs/2402.01711v1
- Date: Thu, 25 Jan 2024 17:45:34 GMT
- Title: LLM on FHIR -- Demystifying Health Records
- Authors: Paul Schmiedmayer, Adrit Rao, Philipp Zagar, Vishnu Ravi, Aydin
Zahedivash, Arash Fereydooni, Oliver Aalami
- Abstract summary: This study developed an app allowing users to interact with their health records using large language models (LLMs).
The app effectively translated medical data into patient-friendly language and was able to adapt its responses to different patient profiles.
- Score: 0.32985979395737786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: To enhance health literacy and accessibility of health information
for a diverse patient population by developing a patient-centered artificial
intelligence (AI) solution using large language models (LLMs) and Fast
Healthcare Interoperability Resources (FHIR) application programming interfaces
(APIs). Materials and Methods: The research involved developing LLM on FHIR, an
open-source mobile application allowing users to interact with their health
records using LLMs. The app is built on Stanford's Spezi ecosystem and uses
OpenAI's GPT-4. A pilot study was conducted with the SyntheticMass patient
dataset and evaluated by medical experts to assess the app's effectiveness in
increasing health literacy. The evaluation focused on the accuracy, relevance,
and understandability of the LLM's responses to common patient questions.
Results: LLM on FHIR demonstrated varying but generally high degrees of
accuracy and relevance in providing understandable health information to
patients. The app effectively translated medical data into patient-friendly
language and was able to adapt its responses to different patient profiles.
However, challenges included variability in LLM responses and the need for
precise filtering of health data. Discussion and Conclusion: LLMs offer
significant potential in improving health literacy and making health records
more accessible. LLM on FHIR, as a pioneering application in this field,
demonstrates the feasibility and challenges of integrating LLMs into patient
care. While promising, the implementation and pilot also highlight risks such
as inconsistent responses and the importance of replicable output. Future
directions include better resource identification mechanisms and executing LLMs
on-device to enhance privacy and reduce costs.
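The pipeline the abstract describes, retrieving records through FHIR APIs and asking an LLM to explain them in patient-friendly language, can be illustrated with a short sketch. The published app is written in Swift on Stanford's Spezi ecosystem; the Python below is only a hypothetical illustration of the same pattern, with a placeholder FHIR server URL, patient ID, and resource type, and is not the authors' implementation.

```python
# Minimal sketch (not the LLM on FHIR implementation): fetch a patient's FHIR
# resources from a standard FHIR REST endpoint and ask an LLM to explain them
# in patient-friendly language. Server URL, patient ID, and model are placeholders.
import json
import requests
from openai import OpenAI

FHIR_BASE = "https://example.org/fhir"   # hypothetical FHIR server
PATIENT_ID = "example-patient-id"        # hypothetical patient identifier


def fetch_resources(resource_type: str) -> list[dict]:
    """Fetch one resource type (e.g. Condition, MedicationRequest) for the patient."""
    response = requests.get(
        f"{FHIR_BASE}/{resource_type}",
        params={"patient": PATIENT_ID},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]


def explain_for_patient(resources: list[dict], question: str) -> str:
    """Send the filtered FHIR JSON plus a patient question to the LLM."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Explain the patient's FHIR records in plain, "
                        "patient-friendly language. Do not give medical advice."},
            {"role": "user",
             "content": f"Records:\n{json.dumps(resources)}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    conditions = fetch_resources("Condition")
    print(explain_for_patient(conditions, "What do my diagnoses mean?"))
```

Sending only a filtered subset of resources, rather than the full record, reflects the abstract's observation that precise filtering of health data is needed to keep responses accurate and relevant.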
Related papers
- PALLM: Evaluating and Enhancing PALLiative Care Conversations with Large Language Models [10.258261180305439]
Large language models (LLMs) offer a new approach to assessing complex communication metrics.
LLMs offer the potential to advance the field through integration into passive sensing and just-in-time intervention systems.
This study explores LLMs as evaluators of palliative care communication quality, leveraging their linguistic, in-context learning, and reasoning capabilities.
arXiv Detail & Related papers (2024-09-23T16:39:12Z)
- IntelliCare: Improving Healthcare Analysis with Variance-Controlled Patient-Level Knowledge from Large Language Models [14.709233593021281]
The integration of external knowledge from Large Language Models (LLMs) presents a promising avenue for improving healthcare predictions.
We propose IntelliCare, a novel framework that leverages LLMs to provide high-quality patient-level external knowledge.
IntelliCare identifies patient cohorts and employs task-relevant statistical information to augment LLM understanding and generation.
arXiv Detail & Related papers (2024-08-23T13:56:00Z)
- Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health [1.8772687384996551]
Large language models (LLMs) have opened up new opportunities for transforming patient engagement in healthcare through conversational AI.
We showcase the power of LLMs in handling unstructured conversational data through four case studies.
arXiv Detail & Related papers (2024-06-19T16:02:04Z)
- Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Healthcare Professionals [1.6574413179773761]
This paper explores the evolving relationship between clinician trust in LLMs and the impact of data sources from predominantly human-generated to AI-generated content.
One of the primary concerns identified is the potential feedback loop that arises as LLMs become more reliant on their outputs for learning.
A key takeaway from our investigation is the critical role of user expertise and the necessity for a discerning approach to trusting and validating LLM outputs.
arXiv Detail & Related papers (2024-03-15T04:04:45Z)
- AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between a Doctor, as the player, and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z)
- Retrieval Augmented Thought Process for Private Data Handling in Healthcare [53.89406286212502]
We introduce the Retrieval-Augmented Thought Process (RATP)
RATP formulates the thought generation of Large Language Models (LLMs) as a multi-step decision process over retrieved information.
On a private dataset of electronic medical records, RATP achieves 35% additional accuracy compared to in-context retrieval-augmented generation for the question-answering task.
arXiv Detail & Related papers (2024-02-12T17:17:50Z)
- Large Language Models Illuminate a Progressive Pathway to Artificial Healthcare Assistant: A Review [16.008511195589925]
Large language models (LLMs) have shown promising capabilities in mimicking human-level language comprehension and reasoning.
This paper provides a comprehensive review on the applications and implications of LLMs in medicine.
arXiv Detail & Related papers (2023-11-03T13:51:36Z)
- MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records [60.35217378132709]
Large language models (LLMs) can follow natural language instructions with human-level fluency.
However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging.
We introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data.
arXiv Detail & Related papers (2023-08-27T12:24:39Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and to check its own outputs (a minimal sketch of this pattern appears after this list).
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- Large Language Models for Healthcare Data Augmentation: An Example on Patient-Trial Matching [49.78442796596806]
We propose an innovative privacy-aware data augmentation approach for patient-trial matching (LLM-PTM)
Our experiments demonstrate a 7.32% average improvement in performance using the proposed LLM-PTM method, and the generalizability to new data is improved by 12.12%.
arXiv Detail & Related papers (2023-03-24T03:14:00Z)
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z)
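As a brief illustration of the self-verification idea referenced in the clinical information extraction entry above: one LLM call extracts items together with quoted evidence from the note (provenance), and a second call asks the model to check each item against the source text. This is a hypothetical sketch under those assumptions, not the cited paper's implementation; the model name and prompts are placeholders.

```python
# Hypothetical sketch of LLM self-verification for clinical extraction:
# pass 1 extracts items with quoted supporting evidence, pass 2 asks the model
# to check each item against the original note. Not the cited paper's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"    # placeholder model name


def ask(system: str, user: str) -> str:
    """Run a single chat completion with a system and user message."""
    completion = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return completion.choices[0].message.content


def extract_with_verification(note: str) -> tuple[str, str]:
    """Return (extraction, verification report) for a clinical note."""
    extraction = ask(
        "Extract medications with dose and frequency from the note. For each "
        "item, quote the exact sentence that supports it (provenance).",
        note,
    )
    verification = ask(
        "Check each extracted item against the note. Flag items whose quoted "
        "evidence does not appear in the note or does not support the item.",
        f"Note:\n{note}\n\nExtraction:\n{extraction}",
    )
    return extraction, verification
```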
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.