Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model
System for Answering Medical Questions using Scientific Literature
- URL: http://arxiv.org/abs/2310.16146v1
- Date: Tue, 24 Oct 2023 19:43:39 GMT
- Title: Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model
System for Answering Medical Questions using Scientific Literature
- Authors: Alejandro Lozano, Scott L Fleming, Chia-Chun Chiang, and Nigam Shah
- Abstract summary: We release Clinfo.ai, an open-source WebApp that answers clinical questions based on dynamically retrieved scientific literature.
We report benchmark results for Clinfo.ai and other publicly available OpenQA systems on PubMedRS-200.
- Score: 44.715854387549605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quickly-expanding nature of published medical literature makes it
challenging for clinicians and researchers to keep up with and summarize
recent, relevant findings in a timely manner. While several closed-source
summarization tools based on large language models (LLMs) now exist, rigorous
and systematic evaluations of their outputs are lacking. Furthermore, there is
a paucity of high-quality datasets and appropriate benchmark tasks with which
to evaluate these tools. We address these issues with four contributions: we
release Clinfo.ai, an open-source WebApp that answers clinical questions based
on dynamically retrieved scientific literature; we specify an information
retrieval and abstractive summarization task to evaluate the performance of
such retrieval-augmented LLM systems; we release a dataset of 200 questions and
corresponding answers derived from published systematic reviews, which we name
PubMed Retrieval and Synthesis (PubMedRS-200); and report benchmark results for
Clinfo.ai and other publicly available OpenQA systems on PubMedRS-200.
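The retrieve-then-synthesize workflow the abstract describes can be sketched as follows. This is a minimal toy illustration, not Clinfo.ai's implementation: the real system dynamically queries PubMed and uses an LLM for abstractive summarization, whereas the retriever and synthesizer below are word-overlap stand-ins and all names are hypothetical.

```python
# Toy retrieve-then-synthesize pipeline (illustrative stand-in for a
# retrieval-augmented QA system over scientific abstracts).

def retrieve(question, corpus, k=2):
    """Rank abstracts by word overlap with the question.

    A stand-in for real literature retrieval (e.g. a PubMed search).
    """
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: -len(q_words & set(doc["abstract"].lower().split())),
    )
    return ranked[:k]

def synthesize(question, docs):
    """Stand-in for the LLM summarization step: cite the retrieved titles."""
    cited = "; ".join(d["title"] for d in docs)
    return f"Answer to '{question}', synthesized from: {cited}"

corpus = [
    {"title": "Statins and LDL", "abstract": "statins lower ldl cholesterol in adults"},
    {"title": "Aspirin trial", "abstract": "aspirin reduces cardiovascular events"},
]

top = retrieve("do statins lower cholesterol", corpus, k=1)
print(synthesize("do statins lower cholesterol", top))
```

The benchmark task specified in the paper evaluates exactly these two stages: whether the right studies are retrieved, and whether the synthesized answer matches the conclusions of a published systematic review.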
Related papers
- Comprehensive and Practical Evaluation of Retrieval-Augmented Generation Systems for Medical Question Answering [70.44269982045415]
Retrieval-augmented generation (RAG) has emerged as a promising approach to enhance the performance of large language models (LLMs)
We introduce Medical Retrieval-Augmented Generation Benchmark (MedRGB) that provides various supplementary elements to four medical QA datasets.
Our experimental results reveal current models' limited ability to handle noise and misinformation in the retrieved documents.
arXiv Detail & Related papers (2024-11-14T06:19:18Z)
- Leveraging Large Language Models for Medical Information Extraction and Query Generation [2.1793134762413433]
This paper introduces a system that integrates large language models (LLMs) into the clinical trial retrieval process.
We evaluate six LLMs for query generation, focusing on open-source and relatively small models that require minimal computational resources.
arXiv Detail & Related papers (2024-10-31T12:01:51Z)
- AutoMIR: Effective Zero-Shot Medical Information Retrieval without Relevance Labels [19.90354530235266]
We introduce a novel approach called Self-Learning Hypothetical Document Embeddings (SL-HyDE) to tackle this issue.
SL-HyDE leverages large language models (LLMs) as generators to generate hypothetical documents based on a given query.
We present the Chinese Medical Information Retrieval Benchmark (CMIRB), a comprehensive evaluation framework grounded in real-world medical scenarios.
arXiv Detail & Related papers (2024-10-26T02:53:20Z)
- DiscoveryBench: Towards Data-Driven Discovery with Large Language Models [50.36636396660163]
We present DiscoveryBench, the first comprehensive benchmark that formalizes the multi-step process of data-driven discovery.
Our benchmark contains 264 tasks collected across 6 diverse domains, such as sociology and engineering.
Our benchmark, thus, illustrates the challenges in autonomous data-driven discovery and serves as a valuable resource for the community to make progress.
arXiv Detail & Related papers (2024-07-01T18:58:22Z)
- Comparative Analysis of Open-Source Language Models in Summarizing Medical Text Data [5.443548415516227]
Large Language Models (LLMs) have demonstrated superior performance in question answering and summarization tasks on unstructured text data.
We propose an evaluation approach to analyze the performance of open-source LLMs for medical summarization tasks.
arXiv Detail & Related papers (2024-05-25T16:16:22Z)
- Large Language Models in the Clinic: A Comprehensive Benchmark [63.21278434331952]
We build a benchmark ClinicBench to better understand large language models (LLMs) in the clinic.
We first collect eleven existing datasets covering diverse clinical language generation, understanding, and reasoning tasks.
We then construct six novel datasets and clinical tasks that are complex but common in real-world practice.
We conduct an extensive evaluation of twenty-two LLMs under both zero-shot and few-shot settings.
arXiv Detail & Related papers (2024-04-25T15:51:06Z)
- CSMeD: Bridging the Dataset Gap in Automated Citation Screening for Systematic Literature Reviews [10.207938863784829]
We introduce CSMeD, a meta-dataset consolidating nine publicly released collections.
CSMeD serves as a comprehensive resource for training and evaluating the performance of automated citation screening models.
We introduce CSMeD-FT, a new dataset designed explicitly for evaluating the full text publication screening task.
arXiv Detail & Related papers (2023-11-21T09:36:11Z)
- Development and validation of a natural language processing algorithm to pseudonymize documents in the context of a clinical data warehouse [53.797797404164946]
The study highlights the difficulties faced in sharing tools and resources in this domain.
We annotated a corpus of clinical documents according to 12 types of identifying entities.
We build a hybrid system, merging the results of a deep learning model as well as manual rules.
arXiv Detail & Related papers (2023-03-23T17:17:46Z)
- MS2: Multi-Document Summarization of Medical Studies [11.38740406132287]
We release MS2 (Multi-Document Summarization of Medical Studies), a dataset of over 470k documents and 20k summaries derived from the scientific literature.
This dataset facilitates the development of systems that can assess and aggregate contradictory evidence across multiple studies.
We experiment with a summarization system based on BART, with promising early results.
arXiv Detail & Related papers (2021-04-13T19:59:34Z)
- CAiRE-COVID: A Question Answering and Query-focused Multi-Document Summarization System for COVID-19 Scholarly Information Management [48.251211691263514]
We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle COVID-19 Open Research dataset Challenge.
Our system aims to tackle the recent challenge of mining the numerous scientific articles being published on COVID-19 by answering high priority questions from the community.
arXiv Detail & Related papers (2020-05-04T15:07:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.