Question-Answering Based Summarization of Electronic Health Records
using Retrieval Augmented Generation
- URL: http://arxiv.org/abs/2401.01469v1
- Date: Wed, 3 Jan 2024 00:09:34 GMT
- Title: Question-Answering Based Summarization of Electronic Health Records
using Retrieval Augmented Generation
- Authors: Walid Saba, Suzanne Wendelken and James Shanahan
- Abstract summary: We propose a method that mitigates these shortcomings by combining semantic search, retrieval-augmented generation, and question answering.
Our approach is efficient, requires minimal to no training, and does not suffer from the 'hallucination' problem of LLMs.
It also ensures diversity, since the summary contains not repeated content but diverse answers to specific questions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Summarization of electronic health records (EHRs) can substantially reduce
'screen time' for both patients and medical personnel. In recent years,
summarization of EHRs has employed machine learning pipelines built on
state-of-the-art neural models. However, these models have produced less than
adequate results, which is attributed to the difficulty of obtaining
sufficient annotated data for training. Moreover, the requirement to consider
the entire content of an EHR in summarization has resulted in poor
performance, because the attention mechanisms in modern large language models
(LLMs) add quadratic complexity in the size of the input. We propose here a
method that mitigates these shortcomings by combining semantic search,
retrieval-augmented generation (RAG), and question answering using the latest
LLMs. In our approach, summarization is the extraction of answers to specific
questions that are deemed important by subject-matter experts (SMEs). Our
approach is efficient, requires minimal to no training, does not suffer from
the 'hallucination' problem of LLMs, and ensures diversity, since the summary
contains not repeated content but diverse answers to specific questions.
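The sketch below illustrates the kind of pipeline the abstract describes: semantic search retrieves only the top-k EHR passages relevant to each SME question (avoiding the quadratic cost of attending to the whole record), and RAG-style prompting constrains the LLM to answer from the retrieved context. The embedding model, the question list, and the `ask_llm` helper are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of QA-based EHR summarization via semantic search + RAG.
# The embedding model, SME question list, and ask_llm() stub are illustrative
# assumptions; they are not the paper's actual implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

# Questions deemed important by subject-matter experts (SMEs) -- examples only.
SME_QUESTIONS = [
    "What is the patient's chief complaint?",
    "What medications is the patient currently taking?",
    "What follow-up care was recommended?",
]

def top_k_passages(question: str, passages: list[str], k: int = 3) -> list[str]:
    """Semantic search: rank EHR passages by cosine similarity to the question."""
    q = embedder.encode([question], normalize_embeddings=True)
    p = embedder.encode(passages, normalize_embeddings=True)
    scores = (p @ q.T).ravel()  # cosine similarity (vectors are normalized)
    return [passages[i] for i in np.argsort(-scores)[:k]]

def ask_llm(prompt: str) -> str:
    """Stub for any chat-completion API; replace with a real LLM call."""
    raise NotImplementedError

def summarize_ehr(passages: list[str]) -> dict[str, str]:
    """RAG: answer each SME question only from its retrieved passages, so each
    summary item is grounded in the record and no content is repeated."""
    summary = {}
    for question in SME_QUESTIONS:
        context = "\n".join(top_k_passages(question, passages))
        prompt = (
            "Answer strictly from the context below; reply 'not documented' "
            f"if the answer is absent.\n\nContext:\n{context}\n\n"
            f"Q: {question}\nA:"
        )
        summary[question] = ask_llm(prompt)
    return summary
```

Answering from retrieved context only is what curbs hallucination here: the model is never asked to produce facts that are not in the record, and distinct questions yield non-overlapping summary items.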
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334] (arXiv, 2024-10-18)
Multimodal Large Language Models (MLLMs) have recently received substantial interest, showing their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
- RespLLM: Unifying Audio and Text with Multimodal LLMs for Generalized Respiratory Health Prediction [20.974460332254544] (arXiv, 2024-10-07)
RespLLM is a novel framework that unifies text and audio representations for respiratory health prediction.
Our work lays the foundation for multimodal models that can perceive, listen, and understand heterogeneous data.
- Crafting Interpretable Embeddings by Asking LLMs Questions [89.49960984640363] (arXiv, 2024-05-26)
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM.
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
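A toy illustration of the QA-Emb idea summarized above: each embedding dimension is an LLM's yes/no answer to one question about the text. The question list and the `ask_yes_no` stub are assumptions for illustration, not the paper's setup.

```python
# Toy sketch of question-answering embeddings (QA-Emb) as described above:
# each feature of the embedding is a yes/no answer an LLM gives about the text.
# The questions and the ask_yes_no() stub are illustrative assumptions.
import numpy as np

QUESTIONS = [
    "Does the text mention a medication?",
    "Does the text describe a symptom?",
    "Is the text about a follow-up visit?",
]

def ask_yes_no(question: str, text: str) -> bool:
    """Stub: pose the question about `text` to any LLM and parse a yes/no reply."""
    raise NotImplementedError

def qa_embed(text: str) -> np.ndarray:
    """One interpretable feature per question: 1.0 if the LLM answers 'yes'."""
    return np.array([float(ask_yes_no(q, text)) for q in QUESTIONS])
```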
- Groundedness in Retrieval-augmented Long-form Generation: An Empirical Study [61.74571814707054] (arXiv, 2024-04-10)
We evaluate whether every generated sentence is grounded in retrieved documents or the model's pre-training data.
Across 3 datasets and 4 model families, our findings reveal that a significant fraction of generated sentences are consistently ungrounded.
Our results show that while larger models tend to ground their outputs more effectively, a significant portion of correct answers remains compromised by hallucinations.
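A crude grounding check in the spirit of the study above: flag generated sentences whose best match among the retrieved documents falls below a similarity threshold. The embedding model and threshold are assumptions; the paper's actual evaluation protocol may differ.

```python
# Sketch of a sentence-level groundedness check: a generated sentence is
# considered ungrounded if no retrieved document supports it above a
# similarity threshold. Model choice and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def ungrounded_sentences(generated: list[str], retrieved: list[str],
                         threshold: float = 0.6) -> list[str]:
    gen = embedder.encode(generated, convert_to_tensor=True)
    ret = embedder.encode(retrieved, convert_to_tensor=True)
    sims = util.cos_sim(gen, ret)      # (n_generated, n_retrieved)
    best = sims.max(dim=1).values      # best-supporting document per sentence
    return [s for s, b in zip(generated, best) if b.item() < threshold]
```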
- Graph-Based Retriever Captures the Long Tail of Biomedical Knowledge [2.2814097119704058] (arXiv, 2024-02-19)
Large language models (LLMs) are transforming the way information is retrieved, with vast amounts of knowledge being summarized and presented.
LLMs are prone to highlighting the most frequently seen pieces of information from the training set and neglecting the rare ones.
We introduce a novel information-retrieval method that leverages a knowledge graph to downsample over-represented clusters of information and mitigate the information overload problem.
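One hypothetical reading of the downsampling idea in the entry above: cap how many retrieved passages any single knowledge-graph entity may contribute, so frequently mentioned entities do not crowd out long-tail ones. The `entity_of` mapping stands in for a real KG lookup and is an assumption.

```python
# Hypothetical sketch: per-entity cap on retrieved passages so that frequent
# entities do not dominate the result list. entity_of() is an assumed stand-in
# for a real NER + knowledge-graph linking step.
from collections import defaultdict

def entity_of(passage: str) -> str:
    """Stub: map a passage to its knowledge-graph entity."""
    raise NotImplementedError

def downsample_by_entity(ranked_passages: list[str],
                         per_entity_cap: int = 2) -> list[str]:
    """Keep at most `per_entity_cap` passages per entity, preserving rank order."""
    counts: dict[str, int] = defaultdict(int)
    kept = []
    for p in ranked_passages:
        e = entity_of(p)
        if counts[e] < per_entity_cap:
            counts[e] += 1
            kept.append(p)
    return kept
```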
- A Question Answering Based Pipeline for Comprehensive Chinese EHR Information Extraction [3.411065529290054] (arXiv, 2024-02-17)
We propose a novel approach that automatically generates training data for transfer learning of question answering models.
Our pipeline incorporates a preprocessing module to handle challenges posed by extraction types.
The obtained QA model exhibits excellent performance on subtasks of information extraction in EHRs.
- Large Language Model Distilling Medication Recommendation Model [61.89754499292561] (arXiv, 2024-02-05)
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs).
Our research aims to transform existing medication recommendation methodologies using LLMs.
To mitigate the high inference cost of LLMs, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
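A generic sketch of feature-level distillation as summarized above: a small student is trained so its hidden features match those of a frozen LLM teacher. The dimensions and the linear projection head are illustrative assumptions, not the paper's architecture.

```python
# Generic feature-level knowledge distillation sketch: align projected student
# features with (detached) teacher features via MSE. Sizes are assumptions.
import torch
import torch.nn as nn

teacher_dim, student_dim = 4096, 768
project = nn.Linear(student_dim, teacher_dim)  # align feature spaces

def feature_distill_loss(student_feats: torch.Tensor,
                         teacher_feats: torch.Tensor) -> torch.Tensor:
    """MSE between projected student features and frozen teacher features."""
    return nn.functional.mse_loss(project(student_feats), teacher_feats.detach())

# Typical usage: total_loss = task_loss + alpha * feature_distill_loss(s, t)
```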
- Prompting Large Language Models for Zero-Shot Clinical Prediction with Structured Longitudinal Electronic Health Record Data [7.815738943706123] (arXiv, 2024-01-25)
Large Language Models (LLMs) are traditionally tailored for natural language processing.
This research investigates the adaptability of LLMs, like GPT-4, to EHR data.
In response to the longitudinal, sparse, and knowledge-infused nature of EHR data, our prompting approach takes these specific characteristics into account, as sketched below.
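An illustrative sketch only: one way to serialize longitudinal, structured EHR events into a prompt while preserving time order and flagging sparsity. The record schema is an assumption, not the paper's format.

```python
# Illustrative serialization of structured, longitudinal EHR events into a
# prompt. The event schema ({'date', 'code', 'value'}) is an assumption.
def ehr_to_prompt(events: list[dict], question: str) -> str:
    """events: [{'date': '2023-01-05', 'code': 'A1C', 'value': 7.9}, ...]"""
    lines = [
        f"{e['date']}: {e['code']} = {e['value']}"
        for e in sorted(events, key=lambda e: e["date"])  # keep temporal order
    ]
    return (
        "Patient record (chronological; absent entries mean no measurement "
        "was taken):\n" + "\n".join(lines) +
        f"\n\nQuestion: {question}\nAnswer:"
    )
```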
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318] (arXiv, 2023-10-10)
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
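A hedged sketch of a self-reflection loop in the spirit of the paper above: generate an answer, ask the model to critique it against the question, and revise until the critique passes. The `ask_llm` stub and the prompts are illustrative assumptions.

```python
# Sketch of a generate -> critique -> revise loop for reducing hallucination.
# ask_llm() is a stub for any chat-completion API; prompts are assumptions.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError

def answer_with_reflection(question: str, max_rounds: int = 3) -> str:
    answer = ask_llm(f"Q: {question}\nA:")
    for _ in range(max_rounds):
        critique = ask_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "List factual errors or unsupported claims, or reply 'OK'."
        )
        if critique.strip().upper() == "OK":
            break  # the model finds no unsupported claims; stop revising
        answer = ask_llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer fixing these issues:"
        )
    return answer
```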
- Medical Question Summarization with Entity-driven Contrastive Learning [12.008269098530386] (arXiv, 2023-04-15)
This paper proposes a novel medical question summarization framework using entity-driven contrastive learning (ECL).
ECL employs medical entities in frequently asked questions (FAQs) as focuses and devises an effective mechanism to generate hard negative samples.
We find that some MQA datasets suffer from serious data leakage problems, such as the iCliniq dataset's 33% duplicate rate.
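A toy illustration of the hard-negative idea in ECL as summarized above: build a negative by swapping the focal medical entity in a question, then train with a standard contrastive objective so the encoder attends to entities. The entity swap and the triplet loss are illustrative assumptions, not the paper's exact mechanism.

```python
# Toy hard-negative construction for entity-driven contrastive learning:
# swapping the medical entity yields a near-duplicate that must be pushed
# away from the anchor. The triplet loss here is an assumed stand-in.
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

def hard_negative(question: str, entity: str, other_entity: str) -> str:
    """Swap the focal medical entity to create a near-duplicate negative."""
    return question.replace(entity, other_entity)

# With an encoder producing (batch, dim) embeddings:
# loss = triplet(encode(q), encode(paraphrase_of_q),
#                encode(hard_negative(q, "ibuprofen", "metformin")))
```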
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978] (arXiv, 2023-03-23)
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
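A minimal sketch of the soft-prompt mechanism the entry above relies on: learnable prompt embeddings are prepended to the token embeddings before a frozen language model, and only the prompt is trained. The sizes are illustrative assumptions.

```python
# Minimal soft-prompt module: trainable embeddings prepended to the input
# embeddings of a frozen LM. n_tokens and dim are assumed, not SPeC's values.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int = 20, dim: int = 768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        """token_embeds: (batch, seq, dim) -> (batch, n_tokens + seq, dim)"""
        batch = token_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeds], dim=1)
```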