Distilling Large Language Models for Matching Patients to Clinical Trials
- URL: http://arxiv.org/abs/2312.09958v1
- Date: Fri, 15 Dec 2023 17:11:07 GMT
- Title: Distilling Large Language Models for Matching Patients to Clinical Trials
- Authors: Mauro Nievas, Aditya Basu, Yanshan Wang, Hrituraj Singh
- Abstract summary: The recent success of large language models (LLMs) has paved the way for their adoption in the high-stakes domain of healthcare.
This study presents the first systematic examination of the efficacy of both proprietary (GPT-3.5 and GPT-4) and open-source LLMs (LLaMA 7B, 13B, and 70B) for the task of patient-trial matching.
Our findings reveal that open-source LLMs, when fine-tuned on this limited and synthetic dataset, demonstrate performance parity with their proprietary counterparts.
- Score: 3.4068841624198942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent success of large language models (LLMs) has paved the way for their adoption in the high-stakes domain of healthcare. Specifically, the application of LLMs in patient-trial matching, which involves assessing patient eligibility against a clinical trial's nuanced inclusion and exclusion criteria, has shown promise. Recent research has shown that GPT-3.5, a widely recognized LLM developed by OpenAI, can outperform existing methods with minimal 'variable engineering' by simply comparing clinical trial information against patient summaries. However, there are significant challenges associated with using closed-source proprietary LLMs like GPT-3.5 in practical healthcare applications, such as cost, privacy, and reproducibility concerns. To address these issues, this study presents the first systematic examination of the efficacy of both proprietary (GPT-3.5 and GPT-4) and open-source LLMs (LLaMA 7B, 13B, and 70B) for the task of patient-trial matching. Employing a multifaceted evaluation framework, we conducted extensive automated and human-centric assessments, coupled with a detailed error analysis for each model. To enhance the adaptability of open-source LLMs, we created a specialized synthetic dataset using GPT-4, enabling effective fine-tuning under constrained data conditions. Our findings reveal that open-source LLMs, when fine-tuned on this limited and synthetic dataset, demonstrate performance parity with their proprietary counterparts, presenting a massive opportunity for their deployment in real-world healthcare applications. To foster further research and applications in this field, we release both the annotated evaluation dataset and the fine-tuned LLM, Trial-LLAMA, for public use.
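The prompting recipe the abstract credits to prior GPT-3.5 work, comparing a trial's criteria against a patient summary, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' released code: `call_llm` is a hypothetical stand-in for any chat-completion client, and the prompt wording and label set (ELIGIBLE / NOT ELIGIBLE / INSUFFICIENT INFORMATION) are invented for the example.

```python
import json

# Hypothetical stand-in for any chat-completion client (OpenAI API,
# a locally served LLaMA, etc.); not the authors' released code.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion endpoint")

PROMPT_TEMPLATE = """You are assessing patient eligibility for a clinical trial.

Trial inclusion criteria:
{inclusion}

Trial exclusion criteria:
{exclusion}

Patient summary:
{patient}

For each criterion, state whether the patient meets it, then give an
overall label: ELIGIBLE, NOT ELIGIBLE, or INSUFFICIENT INFORMATION.
Respond in JSON with keys "criterion_level" and "overall"."""

def match_patient_to_trial(patient_summary: str,
                           inclusion: list[str],
                           exclusion: list[str]) -> dict:
    """Compare one patient summary against one trial's criteria."""
    prompt = PROMPT_TEMPLATE.format(
        inclusion="\n".join(f"- {c}" for c in inclusion),
        exclusion="\n".join(f"- {c}" for c in exclusion),
        patient=patient_summary,
    )
    # Parse the model's criterion-level judgments and overall verdict.
    return json.loads(call_llm(prompt))
```

Asking for criterion-level judgments before the overall label mirrors how eligibility is adjudicated in practice and makes the kind of per-model error analysis the abstract describes easier to audit.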
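The fine-tuning side of the abstract, a small synthetic dataset generated with GPT-4, is commonly realized as instruction-tuning records in JSONL. The record below is a guess at a plausible schema, not the paper's published format:

```python
import json

# Hypothetical instruction-tuning record; the field names and file name
# are illustrative, not the paper's released schema. In the setup the
# abstract describes, GPT-4 generations would fill the "output" field.
record = {
    "instruction": "Assess the patient's eligibility against each criterion.",
    "input": "Criteria: ...\nPatient summary: ...",
    "output": "Criterion 1: met, because ...\nOverall: ELIGIBLE",
}

# One record per line (JSONL), the usual input format for supervised
# fine-tuning of LLaMA-family models.
with open("synthetic_patient_trial.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```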
Related papers
- Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
This study proposes using large language models (LLMs) to elicit expert prior distributions for predictive models.
We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation.
Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings.
arXiv Detail & Related papers (2024-11-26T10:13:39Z)
- Leveraging Large Language Models for Medical Information Extraction and Query Generation [2.1793134762413433]
This paper introduces a system that integrates large language models (LLMs) into the clinical trial retrieval process.
We evaluate six LLMs for query generation, focusing on open-source and relatively small models that require minimal computational resources.
arXiv Detail & Related papers (2024-10-31T12:01:51Z)
- PALLM: Evaluating and Enhancing PALLiative Care Conversations with Large Language Models [10.258261180305439]
Large language models (LLMs) offer a new approach to assessing complex communication metrics.
LLMs offer the potential to advance the field through integration into passive sensing and just-in-time intervention systems.
This study explores LLMs as evaluators of palliative care communication quality, leveraging their linguistic, in-context learning, and reasoning capabilities.
arXiv Detail & Related papers (2024-09-23T16:39:12Z)
- GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI [67.09501109871351]
Large Vision-Language Models (LVLMs) are capable of handling diverse data types such as imaging, text, and physiological signals.
GMAI-MMBench is the most comprehensive general medical AI benchmark to date, with a well-categorized data structure and multiple levels of perceptual granularity.
It is constructed from 284 datasets across 38 medical image modalities, 18 clinical-related tasks, 18 departments, and 4 perceptual granularities in a Visual Question Answering (VQA) format.
arXiv Detail & Related papers (2024-08-06T17:59:21Z)
- Adapting Open-Source Large Language Models for Cost-Effective, Expert-Level Clinical Note Generation with On-Policy Reinforcement Learning [19.08691249610632]
This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13-billion-parameter model.
We introduce a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model.
Our model, LLaMA-Clinic, can generate clinical notes comparable in quality to those authored by physicians.
arXiv Detail & Related papers (2024-04-25T15:34:53Z)
- PRISM: Patient Records Interpretation for Semantic Clinical Trial Matching using Large Language Models [4.438101430231511]
We present the first end-to-end, large-scale empirical evaluation of clinical trial matching using real-world EHRs.
Our study showcases the capability of LLMs to accurately match patients with appropriate clinical trials.
arXiv Detail & Related papers (2024-04-23T22:33:19Z)
- Large Language Model Distilling Medication Recommendation Model [61.89754499292561]
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs).
Our research aims to transform existing medication recommendation methodologies using LLMs.
To mitigate this, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
arXiv Detail & Related papers (2024-02-05T08:25:22Z)
- ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks [91.55895047448249]
This paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases.
We implement ReEval using ChatGPT and evaluate the resulting variants of two popular open-domain QA datasets.
Our generated data is human-readable and useful for triggering hallucinations in large language models.
arXiv Detail & Related papers (2023-10-19T06:37:32Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- Large Language Models for Healthcare Data Augmentation: An Example on Patient-Trial Matching [49.78442796596806]
We propose an innovative privacy-aware data augmentation approach for patient-trial matching (LLM-PTM); a sketch of this style of augmentation appears after this list.
Our experiments demonstrate a 7.32% average improvement in performance using the proposed LLM-PTM method, and the generalizability to new data is improved by 12.12%.
arXiv Detail & Related papers (2023-03-24T03:14:00Z)
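The LLM-PTM entry closes the list with a privacy-aware augmentation approach but leaves the mechanism abstract. One plausible reading, offered here as an assumption rather than the paper's pipeline, is LLM paraphrasing of de-identified patient sentences to enlarge the training set:

```python
# Sketch of privacy-aware augmentation in the spirit of LLM-PTM; the
# prompt and pipeline are assumptions for illustration, not the paper's code.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion endpoint")

def augment_patient_sentences(sentences: list[str], n_variants: int = 3) -> list[str]:
    """Paraphrase de-identified patient sentences to enlarge training data."""
    augmented = []
    for sentence in sentences:
        prompt = (
            "Paraphrase the following de-identified clinical sentence, "
            "preserving every medical fact and adding none:\n" + sentence
        )
        # Each paraphrase becomes an additional, privacy-preserving example.
        augmented.extend(call_llm(prompt) for _ in range(n_variants))
    return augmented
```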