Self-Verification Improves Few-Shot Clinical Information Extraction
- URL: http://arxiv.org/abs/2306.00024v1
- Date: Tue, 30 May 2023 22:05:11 GMT
- Title: Self-Verification Improves Few-Shot Clinical Information Extraction
- Authors: Zelalem Gero, Chandan Singh, Hao Cheng, Tristan Naumann, Michel
Galley, Jianfeng Gao, Hoifung Poon
- Abstract summary: Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
- Score: 73.6905567014859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extracting patient information from unstructured text is a critical task in
health decision-support and clinical research. Large language models (LLMs)
have shown the potential to accelerate clinical curation via few-shot
in-context learning, in contrast to supervised learning, which requires much
more costly human annotations. However, despite drastic advances in modern LLMs
such as GPT-4, they still struggle with issues regarding accuracy and
interpretability, especially in mission-critical domains such as health. Here,
we explore a general mitigation framework using self-verification, which
leverages the LLM to provide provenance for its own extraction and check its
own outputs. This is made possible by the asymmetry between verification and
generation, where verification is often much easier than generation. Experimental
results show that our method consistently improves accuracy for various LLMs in
standard clinical information extraction tasks. Additionally, self-verification
yields interpretations in the form of a short text span corresponding to each
output, which makes it very efficient for human experts to audit the results,
paving the way towards trustworthy extraction of clinical information in
resource-constrained scenarios. To facilitate future research in this
direction, we release our code and prompts.
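The extract-then-verify loop described in the abstract can be pictured with a short sketch. The code below is not the authors' released implementation; it is a minimal illustration assuming a generic chat-completion client (`call_llm` is a hypothetical placeholder), an invented medication-extraction task, and invented prompt wording. The verification stage asks the model to quote the supporting span (its provenance) and re-confirm each candidate before it is kept.

```python
from typing import Dict, List, Tuple


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for any chat-completion client (e.g., a GPT-4 call).
    Swap in your provider's API; it only needs to return the model's text output."""
    raise NotImplementedError


# Stage 1: few-shot extraction prompt (illustrative wording and task, not the paper's prompts).
EXTRACTION_PROMPT = """Extract all medications from the clinical note.
List one medication per line, or write NONE if there are none.

Note: "Patient was started on metformin 500 mg and continues lisinopril."
Medications:
metformin
lisinopril

Note: "{note}"
Medications:
"""

# Stage 2: self-verification prompt: quote provenance, then confirm or reject the candidate.
VERIFICATION_PROMPT = """Check a candidate extraction against the note.
On the first line, quote the exact span of the note that mentions the candidate.
On the second line, write YES if that span shows the candidate is a medication in the note, otherwise NO.

Note: "{note}"
Candidate: "{candidate}"
"""


def extract(note: str) -> List[str]:
    """Generate candidate answers with few-shot in-context learning."""
    raw = call_llm(EXTRACTION_PROMPT.format(note=note))
    lines = [ln.strip() for ln in raw.splitlines() if ln.strip()]
    return [] if lines == ["NONE"] else lines


def verify(note: str, candidate: str) -> Tuple[str, bool]:
    """Ask the model to supply provenance for, and re-check, one of its own outputs."""
    raw = call_llm(VERIFICATION_PROMPT.format(note=note, candidate=candidate))
    lines = [ln.strip() for ln in raw.splitlines() if ln.strip()]
    evidence = lines[0] if lines else ""
    confirmed = bool(lines) and lines[-1].upper().startswith("YES")
    return evidence, confirmed


def extract_with_self_verification(note: str) -> List[Dict[str, str]]:
    """Keep only candidates the model re-confirms, attaching the quoted evidence span."""
    kept = []
    for candidate in extract(note):
        evidence, confirmed = verify(note, candidate)
        if confirmed:
            kept.append({"answer": candidate, "evidence": evidence})
    return kept
```

Because each kept answer carries the quoted evidence span, a reviewer only needs to check that short span rather than reread the whole note, which is the auditing benefit the abstract highlights.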
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language model (LLM) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- Automated Clinical Data Extraction with Knowledge Conditioned LLMs [7.935125803100394]
Large language models (LLMs) can be effective at interpreting unstructured text in reports, but they often hallucinate due to a lack of domain-specific knowledge.
We propose a novel framework that aligns generated internal knowledge with external knowledge through in-context learning (ICL).
Our framework employs a retriever to identify relevant units of internal or external knowledge and a grader to evaluate the truthfulness and helpfulness of the retrieved internal-knowledge rules.
arXiv Detail & Related papers (2024-06-26T02:49:28Z)
- Attribute Structuring Improves LLM-Based Evaluation of Clinical Text Summaries [62.32403630651586]
Large language models (LLMs) have shown the potential to generate accurate clinical text summaries, but still struggle with issues regarding grounding and evaluation.
Here, we explore a general mitigation framework using Attribute Structuring (AS), which structures the summary evaluation process.
AS consistently improves the correspondence between human annotations and automated metrics in clinical text summarization.
arXiv Detail & Related papers (2024-03-01T21:59:03Z)
- C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
arXiv Detail & Related papers (2024-02-17T11:28:08Z)
- Natural Language Programming in Medicine: Administering Evidence Based Clinical Workflows with Autonomous Agents Powered by Generative Large Language Models [29.05425041393475]
Generative Large Language Models (LLMs) hold significant promise in healthcare.
This study assessed the potential of LLMs to function as autonomous agents in a simulated tertiary care medical center.
arXiv Detail & Related papers (2024-01-05T15:09:57Z)
- LLMs Accelerate Annotation for Medical Information Extraction [7.743388571513413]
We propose an approach that combines Large Language Models (LLMs) with human expertise to create an efficient method for generating ground truth labels for medical text annotation.
We rigorously evaluate our method on a medical information extraction task, demonstrating that our approach not only substantially cuts down on human intervention but also maintains high accuracy.
arXiv Detail & Related papers (2023-12-04T19:26:13Z)
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z)
- Retrieval-Augmented and Knowledge-Grounded Language Models for Faithful Clinical Medicine [68.7814360102644]
We propose the Re^3Writer method with retrieval-augmented generation and knowledge-grounded reasoning.
We demonstrate the effectiveness of our method in generating patient discharge instructions.
arXiv Detail & Related papers (2022-10-23T16:34:39Z)