Improving Patient Pre-screening for Clinical Trials: Assisting
Physicians with Large Language Models
- URL: http://arxiv.org/abs/2304.07396v2
- Date: Thu, 29 Jun 2023 12:59:16 GMT
- Title: Improving Patient Pre-screening for Clinical Trials: Assisting
Physicians with Large Language Models
- Authors: Danny M. den Hamer, Perry Schoor, Tobias B. Polak and Daniel Kapitan
- Abstract summary: Large Language Models (LLMs) have been shown to perform well at clinical information extraction and clinical reasoning.
This paper investigates the use of InstructGPT to assist physicians in determining eligibility for clinical trials based on a patient's summarised medical profile.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physicians considering clinical trials for their patients face the
laborious process of checking many text-based eligibility criteria. Large
Language Models (LLMs) have been shown to perform well at clinical information
extraction and clinical reasoning, including on medical tests, but not yet in
real-world scenarios. This paper investigates the use of InstructGPT to assist
physicians in determining eligibility for clinical trials based on a patient's
summarised medical profile. Using a prompting strategy that combines one-shot,
selection-inference and chain-of-thought techniques, we investigate the
performance of LLMs on 10 synthetically created patient profiles. Performance
is evaluated at four levels: the ability to identify screenable eligibility
criteria from a trial given a medical profile; the ability to classify, for
each individual criterion, whether the patient qualifies; the overall
classification of whether a patient is eligible for a clinical trial; and the
percentage of criteria to be screened by the physician. We evaluated against
146 clinical trials and a total of 4,135 eligibility criteria. The LLM
correctly identified the screenability of 72% (2,994/4,135) of the criteria.
Additionally, 72% (341/471) of the screenable criteria were evaluated
correctly. The resulting trial-level classification as eligible or ineligible
achieved a recall of 0.5. By leveraging LLMs with a physician-in-the-loop, a
recall of 1.0 and a precision of 0.71 at the clinical-trial level can be
achieved while reducing the number of criteria to be checked by an estimated
90%. LLMs can thus be used to assist physicians with pre-screening patients
for clinical trials. By forcing instruction-tuned LLMs to produce
chain-of-thought responses, the reasoning becomes transparent to physicians
and the decision process amenable to their review, thereby making such a
system feasible for use in real-world scenarios.
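The prompting strategy described above, a one-shot example plus chain-of-thought reasoning per criterion, with a "not screenable" escape hatch for the selection-inference step, could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual prompts: the example profile, prompt wording, and helper names (`build_prompt`, `parse_response`) are assumptions.

```python
# Illustrative sketch of a one-shot, chain-of-thought prompting strategy for
# per-criterion eligibility classification. All prompt text is hypothetical.

ONE_SHOT_EXAMPLE = """\
Profile: 67-year-old male, stage III NSCLC, ECOG 1, no prior chemotherapy.
Criterion: Age >= 18 years.
Reasoning: The profile states the patient is 67, which is at least 18.
Answer: ELIGIBLE"""

def build_prompt(profile: str, criterion: str) -> str:
    """Compose a one-shot chain-of-thought prompt for a single criterion.

    NOT-SCREENABLE covers the selection-inference step: criteria that
    cannot be decided from the summarised profile are flagged for the
    physician instead of being guessed at.
    """
    return (
        "Decide whether the patient meets the eligibility criterion. "
        "Think step by step, then answer ELIGIBLE, INELIGIBLE, or "
        "NOT-SCREENABLE if the profile lacks the needed information.\n\n"
        f"{ONE_SHOT_EXAMPLE}\n\n"
        f"Profile: {profile}\nCriterion: {criterion}\nReasoning:"
    )

def parse_response(completion: str) -> str:
    """Extract the final label from a chain-of-thought completion.

    NOT-SCREENABLE and INELIGIBLE are checked before ELIGIBLE because
    "INELIGIBLE" contains "ELIGIBLE" as a substring.
    """
    text = completion.upper()
    for label in ("NOT-SCREENABLE", "INELIGIBLE", "ELIGIBLE"):
        if label in text:
            return label
    return "UNPARSED"  # route to the physician-in-the-loop
```

In a physician-in-the-loop setup, only criteria parsed as NOT-SCREENABLE or UNPARSED (and any trial the LLM marks ineligible) would need manual review, which is how the reported ~90% reduction in criteria to check could arise.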
Related papers
- CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios (arXiv, 2024-10-04)
  CliMedBench is a comprehensive benchmark with 14 expert-guided core clinical scenarios.
  The reliability of this benchmark has been confirmed in several ways.
- Controlled LLM-based Reasoning for Clinical Trial Retrieval (arXiv, 2024-09-19)
  We propose a scalable method that extends the capabilities of LLMs toward systematic reasoning over sets of medical eligibility criteria.
  The proposed method is evaluated on TREC 2022 Clinical Trials, achieving results superior to the state of the art: NDCG@10 of 0.693 and Precision@10 of 0.73.
- End-To-End Clinical Trial Matching with Large Language Models (arXiv, 2024-07-18)
  We present an end-to-end pipeline for clinical trial matching using Large Language Models (LLMs).
  Our approach identifies relevant candidate trials in 93.3% of cases and achieves a preliminary accuracy of 88.0%.
  Our fully end-to-end pipeline can operate autonomously or with human supervision and is not restricted to oncology.
- Zero-Shot Clinical Trial Patient Matching with LLMs (arXiv, 2024-02-05)
  Large language models (LLMs) offer a promising solution to automated screening.
  We design an LLM-based system which, given a patient's medical history as unstructured clinical text, evaluates whether that patient meets a set of inclusion criteria.
  Our system achieves state-of-the-art scores on the n2c2 2018 cohort selection benchmark.
- Matching Patients to Clinical Trials with Large Language Models (arXiv, 2023-07-27)
  We introduce TrialGPT, a first-of-its-kind large language model (LLM) framework to assist patient-to-trial matching.
  Given a patient note, TrialGPT predicts the patient's eligibility on a criterion-by-criterion basis.
  We evaluate the trial-level prediction performance of TrialGPT on three publicly available cohorts of 184 patients with over 18,000 trial annotations.
- TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic Tree-Based Memory Network (arXiv, 2023-07-19)
  Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
  In recent years, machine learning models have been proposed to speed up patient recruitment by automatically matching patients with clinical trials.
  We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient-trial matching.
- AutoTrial: Prompting Language Models for Clinical Trial Design (arXiv, 2023-05-19)
  We present a method named AutoTrial to aid the design of clinical eligibility criteria using language models.
  Experiments on over 70K clinical trials verify that AutoTrial generates high-quality criteria texts.
- Towards Fair Patient-Trial Matching via Patient-Criterion Level Fairness Constraint (arXiv, 2023-03-24)
  This work proposes a fair patient-trial matching framework that generates a patient-criterion-level fairness constraint.
  The experimental results on real-world patient-trial and patient-criterion matching tasks demonstrate that the proposed framework successfully mitigates biased predictions.
- The Leaf Clinical Trials Corpus: a new resource for query generation from clinical trial eligibility criteria (arXiv, 2022-07-27)
  We introduce the Leaf Clinical Trials (LCT) corpus, a human-annotated corpus of over 1,000 clinical trial eligibility criteria descriptions.
  We provide details of our schema, annotation process, corpus quality, and statistics.
- Clinical trial site matching with improved diversity using fair policy learning (arXiv, 2022-04-13)
  We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
  Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
  We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
- Contextual Constrained Learning for Dose-Finding Clinical Trials (arXiv, 2020-01-08)
  C3T-Budget is a contextual constrained clinical trial algorithm for dose-finding under both budget and safety constraints.
  It recruits patients with consideration of the remaining budget, the remaining time, and the characteristics of each group.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.