How well do LLMs cite relevant medical references? An evaluation
framework and analyses
- URL: http://arxiv.org/abs/2402.02008v1
- Date: Sat, 3 Feb 2024 03:44:57 GMT
- Title: How well do LLMs cite relevant medical references? An evaluation
framework and analyses
- Authors: Kevin Wu, Eric Wu, Ally Cassasola, Angela Zhang, Kevin Wei, Teresa
Nguyen, Sith Riantawan, Patricia Shi Riantawan, Daniel E. Ho, James Zou
- Abstract summary: Large language models (LLMs) are currently being used to answer medical questions across a variety of clinical domains.
In this paper, we ask: do the sources that LLMs generate actually support the claims that they make?
We demonstrate that GPT-4 is highly accurate in validating source relevance, agreeing 88% of the time with a panel of medical doctors.
- Score: 18.1921791355309
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are currently being used to answer medical
questions across a variety of clinical domains. Recent top-performing
commercial LLMs, in particular, are also capable of citing sources to support
their responses. In this paper, we ask: do the sources that LLMs generate
actually support the claims that they make? To answer this, we propose three
contributions. First, as expert medical annotations are an expensive and
time-consuming bottleneck for scalable evaluation, we demonstrate that GPT-4 is
highly accurate in validating source relevance, agreeing 88% of the time with a
panel of medical doctors. Second, we develop an end-to-end, automated pipeline
called SourceCheckup and use it to evaluate five top-performing LLMs
on a dataset of 1200 generated questions, totaling over 40K pairs of statements
and sources. Interestingly, we find that between roughly 50% and 90% of LLM responses
are not fully supported by the sources they provide. We also evaluate GPT-4
with retrieval augmented generation (RAG) and find that, even still, around
30% of individual statements are unsupported, while nearly half of its
responses are not fully supported. Third, we open-source our curated dataset of
medical questions and expert annotations for future evaluations. Given the
rapid pace of LLM development and the potential harms of incorrect or outdated
medical information, it is crucial to also understand and quantify their
capability to produce relevant, trustworthy medical references.
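As a rough illustration of the verification step this pipeline relies on, the sketch below asks a GPT-4-class model whether a cited source supports a given statement, and treats a response as fully supported only when every statement passes. It assumes an OpenAI-style chat-completions client; the prompt wording, function names, and model identifier are illustrative assumptions, not the paper's actual SourceCheckup implementation.

```python
# Minimal sketch of LLM-based source verification (not the authors' code).
# Assumes the `openai` Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VERIFY_PROMPT = """You are verifying a medical citation.
Statement: {statement}
Source text: {source_text}

Does the source support the statement? Answer with exactly one word:
"supported" or "unsupported"."""


def verify_statement(statement: str, source_text: str, model: str = "gpt-4") -> bool:
    """Return True if the model judges the cited source to support the statement."""
    response = client.chat.completions.create(
        model=model,  # model choice is an assumption; the paper evaluates GPT-4
        temperature=0,
        messages=[{
            "role": "user",
            "content": VERIFY_PROMPT.format(statement=statement, source_text=source_text),
        }],
    )
    verdict = response.choices[0].message.content.strip().lower()
    return verdict.startswith("supported")


def response_fully_supported(statement_source_pairs) -> bool:
    """A response counts as fully supported only if every statement-source pair passes."""
    return all(verify_statement(s, src) for s, src in statement_source_pairs)
```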
Related papers
- Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval [55.63711219190506]
Large language models (LLMs) often struggle with posing the right search queries.
We introduce Learning to Retrieve by Trying (LeReT).
LeReT can improve the absolute retrieval accuracy by up to 29% and the downstream generator evaluations by 17%.
arXiv Detail & Related papers (2024-10-30T17:02:54Z) - GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI [67.09501109871351]
Large Vision-Language Models (LVLMs) are capable of handling diverse data types such as imaging, text, and physiological signals.
GMAI-MMBench is the most comprehensive general medical AI benchmark with well-categorized data structure and multi-perceptual granularity to date.
It is constructed from 284 datasets across 38 medical image modalities, 18 clinical-related tasks, 18 departments, and 4 perceptual granularities in a Visual Question Answering (VQA) format.
arXiv Detail & Related papers (2024-08-06T17:59:21Z) - How do you know that? Teaching Generative Language Models to Reference Answers to Biomedical Questions [0.0]
Large language models (LLMs) have recently become the leading source of answers for users' questions online.
Despite their ability to offer eloquent answers, their accuracy and reliability can pose a significant challenge.
This paper introduces a biomedical retrieval-augmented generation (RAG) system designed to enhance the reliability of generated responses.
arXiv Detail & Related papers (2024-07-06T09:10:05Z) - Answering real-world clinical questions using large language model based systems [2.2605659089865355]
Large language models (LLMs) could potentially address both challenges by either summarizing published literature or generating new studies based on real-world data (RWD).
We evaluated the ability of five LLM-based systems in answering 50 clinical questions and had nine independent physicians review the responses for relevance, reliability, and actionability.
arXiv Detail & Related papers (2024-06-29T22:39:20Z) - MedExQA: Medical Question Answering Benchmark with Multiple Explanations [2.2246416434538308]
This paper introduces MedExQA, a novel benchmark in medical question-answering to evaluate large language models' (LLMs) understanding of medical knowledge through explanations.
By constructing datasets across five distinct medical specialties, we address a major gap in current medical QA benchmarks.
Our work highlights the importance of explainability in medical LLMs, proposes an effective methodology for evaluating models beyond classification accuracy, and sheds light on one specific domain, speech language pathology.
arXiv Detail & Related papers (2024-06-10T14:47:04Z) - OLAPH: Improving Factuality in Biomedical Long-form Question Answering [15.585833125854418]
We introduce MedLFQA, a benchmark dataset reconstructed using long-form question-answering datasets related to the biomedical domain.
We also propose OLAPH, a simple and novel framework that utilizes cost-effective and multifaceted automatic evaluation.
Our findings reveal that a 7B LLM trained with our OLAPH framework can provide long answers comparable to the medical experts' answers in terms of factuality.
arXiv Detail & Related papers (2024-05-21T11:50:16Z) - A Survey of Large Language Models in Medicine: Progress, Application, and Challenge [85.09998659355038]
Large language models (LLMs) have received substantial attention due to their capabilities for understanding and generating human language.
This review aims to provide a detailed overview of the development and deployment of LLMs in medicine.
arXiv Detail & Related papers (2023-11-09T02:55:58Z) - Augmenting Black-box LLMs with Medical Textbooks for Biomedical Question Answering (Published in Findings of EMNLP 2024) [48.17095875619711]
We present a system called LLMs Augmented with Medical Textbooks (LLM-AMT).
LLM-AMT integrates authoritative medical textbooks into the LLMs' framework using plug-and-play modules.
We find that medical textbooks, used as a retrieval corpus, are a more effective knowledge base than Wikipedia in the medical domain.
arXiv Detail & Related papers (2023-09-05T13:39:38Z) - MedAlign: A Clinician-Generated Dataset for Instruction Following with
Electronic Medical Records [60.35217378132709]
Large language models (LLMs) can follow natural language instructions with human-level fluency.
However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging.
We introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data.
arXiv Detail & Related papers (2023-08-27T12:24:39Z) - Appraising the Potential Uses and Harms of LLMs for Medical Systematic
Reviews [21.546144601311187]
Large language models (LLMs) offer potential to automatically generate literature reviews on demand.
LLMs sometimes generate inaccurate (and potentially misleading) texts by hallucination or omission.
arXiv Detail & Related papers (2023-05-19T17:09:19Z) - Statistical Knowledge Assessment for Large Language Models [79.07989821512128]
Given varying prompts regarding a factoid question, can a large language model (LLM) reliably generate factually correct answers?
We propose KaRR, a statistical approach to assess factual knowledge for LLMs.
Our results reveal that the knowledge in LLMs with the same backbone architecture adheres to the scaling law, while tuning on instruction-following data sometimes compromises the model's capability to generate factually correct text reliably.
arXiv Detail & Related papers (2023-05-17T18:54:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.