ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data
and Comprehensive Evaluation
- URL: http://arxiv.org/abs/2306.09968v1
- Date: Fri, 16 Jun 2023 16:56:32 GMT
- Title: ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data
and Comprehensive Evaluation
- Authors: Guangyu Wang, Guoxing Yang, Zongxin Du, Longjun Fan, Xiaohu Li
- Abstract summary: Large language models have exhibited exceptional performance on various Natural Language Processing (NLP) tasks.
Despite these advances, their effectiveness in medical applications is limited by challenges such as factual inaccuracies, limited reasoning abilities, and a lack of grounding in real-world experience.
We present ClinicalGPT, a language model explicitly designed and optimized for clinical scenarios.
- Score: 5.690250818139763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have exhibited exceptional performance on various
Natural Language Processing (NLP) tasks, leveraging techniques such as
pre-training and instruction fine-tuning. Despite these advances, their
effectiveness in medical applications is limited by challenges such as
factual inaccuracies, limited reasoning abilities, and a lack of grounding in
real-world experience. In this study, we present ClinicalGPT, a language model
explicitly designed and optimized for clinical scenarios. By incorporating
extensive and diverse real-world data, such as medical records,
domain-specific knowledge, and multi-round dialogue consultations in the
training process, ClinicalGPT is better prepared to handle multiple clinical
tasks. Furthermore, we introduce a
comprehensive evaluation framework that includes medical knowledge
question-answering, medical exams, patient consultations, and diagnostic
analysis of medical records. Our results demonstrate that ClinicalGPT
significantly outperforms other models in these tasks, highlighting the
effectiveness of our approach in adapting large language models to the critical
domain of healthcare.
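The training recipe described in the abstract (supervised fine-tuning of a general-purpose language model on medical records, domain knowledge, and multi-round consultation dialogues) follows the standard instruction-tuning pattern. The snippet below is a minimal, illustrative sketch of that pattern using Hugging Face Transformers; the base checkpoint, data file, prompt format, and hyperparameters are placeholders, not the configuration reported in the paper.

```python
# Minimal sketch of instruction fine-tuning on medical dialogue data.
# The checkpoint, data path, prompt format, and hyperparameters are
# placeholders, not the actual ClinicalGPT setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "bigscience/bloom-560m"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Each JSON line is assumed to look like:
# {"instruction": "<patient question or record>", "response": "<clinician answer>"}
dataset = load_dataset("json", data_files="medical_dialogues.jsonl", split="train")

def tokenize(example):
    text = (f"Instruction: {example['instruction']}\n"
            f"Response: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="clinical-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: pads batches and copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```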
Related papers
- A Survey of Medical Vision-and-Language Applications and Their Techniques [48.268198631277315]
Medical vision-and-language models (MVLMs) have attracted substantial interest due to their capability to offer a natural language interface for interpreting complex medical data.
Here, we provide a comprehensive overview of MVLMs and the various medical tasks to which they have been applied.
We also examine the datasets used for these tasks and compare the performance of different models based on standardized evaluation metrics.
arXiv Detail & Related papers (2024-11-19T03:27:05Z)
- Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z)
- RuleAlign: Making Large Language Models Better Physicians with Diagnostic Rule Alignment [54.91736546490813]
We introduce the RuleAlign framework, designed to align Large Language Models with specific diagnostic rules.
We develop a medical dialogue dataset comprising rule-based communications between patients and physicians.
Experimental results demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-08-22T17:44:40Z)
- A Comprehensive Survey on Evaluating Large Language Model Applications in the Medical Industry [2.1717945745027425]
Large Language Models (LLMs) have evolved significantly, impacting various industries with their advanced capabilities in language understanding and generation.
This comprehensive survey delineates the extensive application and requisite evaluation of LLMs within healthcare.
Our survey is structured to provide an in-depth analysis of LLM applications across clinical settings, medical text data processing, research, education, and public health awareness.
arXiv Detail & Related papers (2024-04-24T09:55:24Z)
- AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework that simulates dynamic medical interactions between a Doctor (the player) and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z)
- Preserving the knowledge of long clinical texts using aggregated ensembles of large language models [0.0]
Clinical texts contain rich and valuable information that can be used for various clinical outcome prediction tasks.
Applying large language models, such as BERT-based models, to clinical texts poses two major challenges.
This paper proposes a novel method to preserve the knowledge of long clinical texts using aggregated ensembles of large language models.
arXiv Detail & Related papers (2023-11-02T19:50:02Z)
- Emulating Human Cognitive Processes for Expert-Level Medical Question-Answering with Large Language Models [0.23463422965432823]
BooksMed is a novel framework based on a Large Language Model (LLM).
It emulates human cognitive processes to deliver evidence-based and reliable responses.
We present ExpertMedQA, a benchmark comprised of open-ended, expert-level clinical questions.
arXiv Detail & Related papers (2023-10-17T13:39:26Z)
- Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain [13.912870728383396]
Adapting pretrained language models to novel domains, such as clinical applications, traditionally involves retraining their entire set of parameters.
We propose a two-step PEFT framework and evaluate it in the clinical domain (a generic parameter-efficient fine-tuning sketch appears after this list).
arXiv Detail & Related papers (2023-07-06T15:06:41Z)
- Almanac: Retrieval-Augmented Language Models for Clinical Medicine [1.5505279143287174]
We develop Almanac, a large language model framework augmented with retrieval capabilities for medical guideline and treatment recommendations.
Performance on a novel dataset of clinical scenarios evaluated by a panel of 5 board-certified and resident physicians demonstrates significant increases in factuality.
arXiv Detail & Related papers (2023-03-01T02:30:11Z)
- Cross-Lingual Knowledge Transfer for Clinical Phenotyping [55.92262310716537]
We investigate cross-lingual knowledge transfer strategies to execute this task for clinics that do not use the English language.
We evaluate these strategies for a Greek and a Spanish clinic leveraging clinical notes from different clinical domains.
Our results show that using multilingual data overall improves clinical phenotyping models and can compensate for data sparseness.
arXiv Detail & Related papers (2022-08-03T08:33:21Z)
- Benchmarking Automated Clinical Language Simplification: Dataset, Algorithm, and Evaluation [48.87254340298189]
We construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches.
We propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-04T06:09:02Z)
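The parameter-efficient fine-tuning entry above contrasts retraining an entire model with updating only a small set of added parameters. As a generic illustration of that idea (not the two-step framework proposed in that paper), the sketch below attaches LoRA adapters to a causal language model with the peft library; the model name and LoRA hyperparameters are assumptions.

```python
# Generic parameter-efficient fine-tuning sketch using LoRA adapters.
# Illustrates PEFT in general, not the specific two-step framework from the
# related paper; model name and hyperparameters are placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
# Only the injected adapter weights are trainable; the base weights stay frozen.
model.print_trainable_parameters()
```

The same supervised fine-tuning loop from the earlier sketch can then be reused on the wrapped model; only the adapter parameters receive gradient updates, which is what makes clinical-domain adaptation feasible without retraining every parameter.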
This list is automatically generated from the titles and abstracts of the papers in this site.