OpenMedLM: Prompt engineering can out-perform fine-tuning in medical
question-answering with open-source large language models
- URL: http://arxiv.org/abs/2402.19371v1
- Date: Thu, 29 Feb 2024 17:19:39 GMT
- Title: OpenMedLM: Prompt engineering can out-perform fine-tuning in medical
question-answering with open-source large language models
- Authors: Jenish Maharjan, Anurag Garikipati, Navan Preet Singh, Leo Cyrus,
Mayank Sharma, Madalina Ciobanu, Gina Barnes, Rahul Thapa, Qingqing Mao,
Ritankar Das
- Abstract summary: Open-source (OS) models represent a key area of growth for medical LLMs.
We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks.
- Score: 4.556924372105915
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: LLMs have become increasingly capable at accomplishing a range of
specialized tasks and can be utilized to expand equitable access to medical
knowledge. Most medical LLMs have involved extensive fine-tuning, leveraging
specialized medical data and significant, and therefore costly, amounts of
computational power. Many of the top-performing LLMs are proprietary, and their
access is
limited to very few research groups. However, open-source (OS) models represent
a key area of growth for medical LLMs due to significant improvements in
performance and an inherent ability to provide the transparency and compliance
required in healthcare. We present OpenMedLM, a prompting platform which
delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks.
We evaluated a range of OS foundation LLMs (7B-70B) on four medical benchmarks
(MedQA, MedMCQA, PubMedQA, MMLU medical-subset). We employed a series of
prompting strategies, including zero-shot, few-shot, chain-of-thought (random
selection and kNN selection), and ensemble/self-consistency voting. We found
that OpenMedLM delivers OS SOTA results on three common medical LLM benchmarks,
surpassing the previous best performing OS models that leveraged
computationally costly extensive fine-tuning. The model delivers a 72.6%
accuracy on the MedQA benchmark, outperforming the previous SOTA by 2.4%, and
achieves 81.7% accuracy on the MMLU medical-subset, establishing itself as the
first OS LLM to surpass 80% accuracy on this benchmark. Our results highlight
medical-specific emergent properties in OS LLMs which have not previously been
documented, and showcase the benefits of further leveraging
prompt engineering to improve the performance of accessible LLMs for medical
applications.
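
As a rough illustration of the prompting strategies evaluated above, the sketch below combines kNN selection of chain-of-thought few-shot examples with ensemble/self-consistency voting. This is a minimal sketch under stated assumptions, not the paper's released code: the embedding vectors (`question_vec`, `bank_vecs`), the example bank, and the `generate` callable are hypothetical stand-ins for whatever embedding model and LLM inference API a real pipeline would use.

```python
# Minimal sketch (not the OpenMedLM codebase) of two strategies the
# abstract names: kNN selection of chain-of-thought few-shot examples
# and ensemble/self-consistency voting over sampled completions.
from collections import Counter

import numpy as np


def knn_shots(question_vec, bank_vecs, bank_examples, k=5):
    """Pick the k bank examples whose embeddings are closest (by cosine
    similarity) to the test question's embedding."""
    sims = bank_vecs @ question_vec / (
        np.linalg.norm(bank_vecs, axis=1) * np.linalg.norm(question_vec)
    )
    return [bank_examples[i] for i in np.argsort(-sims)[:k]]


def build_cot_prompt(question, shots):
    """Assemble a few-shot chain-of-thought prompt: each selected
    example shows a worked rationale before its answer."""
    blocks = [
        f"Question: {s['question']}\nReasoning: {s['rationale']}\n"
        f"Answer: {s['answer']}"
        for s in shots
    ]
    blocks.append(f"Question: {question}\nReasoning:")
    return "\n\n".join(blocks)


def self_consistency_vote(prompt, generate, n_samples=20, temperature=0.7):
    """Sample several reasoning chains at nonzero temperature and return
    the majority-vote answer (ensemble/self-consistency voting)."""
    answers = []
    for _ in range(n_samples):
        completion = generate(prompt, temperature=temperature)
        # Assumes each completion ends with a line like "Answer: C".
        answers.append(completion.rsplit("Answer:", 1)[-1].strip()[:1])
    return Counter(answers).most_common(1)[0][0]
```

The design point, consistent with the paper's thesis, is that both pieces operate at inference time only: nearest-neighbor shot selection replaces random few-shot choice, and voting over many sampled reasoning chains trades extra query-time compute for accuracy, with no fine-tuning involved.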
Related papers
- Understanding the Role of LLMs in Multimodal Evaluation Benchmarks [77.59035801244278]
This paper investigates the role of the Large Language Model (LLM) backbone in the evaluation of Multimodal Large Language Models (MLLMs).
Our study encompasses four diverse MLLM benchmarks and eight state-of-the-art MLLMs.
Key findings reveal that some benchmarks allow high performance even without visual inputs, and that up to 50% of errors can be attributed to insufficient world knowledge in the LLM backbone.
arXiv Detail & Related papers (2024-10-16T07:49:13Z)
- Development and bilingual evaluation of Japanese medical large language model within reasonably low computational resources [0.0]
We present a medical adaptation based on recent 7B models, which enables operation with low computational resources.
We find that fine-tuning an English-centric base model on a Japanese medical dataset improves the score in both languages.
arXiv Detail & Related papers (2024-09-18T08:07:37Z)
- Lightweight Large Language Model for Medication Enquiry: Med-Pal [2.3095351248532268]
Large Language Models (LLMs) have emerged as a potential solution to assist digital health development with patient education.
We trained and validated Med-Pal, a medication domain-specific LLM-chatbot fine-tuned on a fine-grained, expert-curated dataset.
arXiv Detail & Related papers (2024-07-02T03:32:39Z)
- Med42 -- Evaluating Fine-Tuning Strategies for Medical LLMs: Full-Parameter vs. Parameter-Efficient Approaches [7.3384872719063114]
We developed and refined a series of medical Large Language Models (LLMs) based on the Llama-2 architecture.
Our experiments systematically evaluate the effectiveness of these tuning strategies across various well-known medical benchmarks.
arXiv Detail & Related papers (2024-04-23T06:36:21Z)
- Large Language Model Distilling Medication Recommendation Model [61.89754499292561]
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs).
Our research aims to transform existing medication recommendation methodologies using LLMs.
To mitigate this, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
arXiv Detail & Related papers (2024-02-05T08:25:22Z)
- MEDITRON-70B: Scaling Medical Pretraining for Large Language Models [91.25119823784705]
Large language models (LLMs) can potentially democratize access to medical knowledge.
We release MEDITRON: a suite of open-source LLMs with 7B and 70B parameters adapted to the medical domain.
arXiv Detail & Related papers (2023-11-27T18:49:43Z)
- A Survey of Large Language Models in Medicine: Progress, Application, and Challenge [85.09998659355038]
Large language models (LLMs) have received substantial attention due to their capabilities for understanding and generating human language.
This review aims to provide a detailed overview of the development and deployment of LLMs in medicine.
arXiv Detail & Related papers (2023-11-09T02:55:58Z)
- MKRAG: Medical Knowledge Retrieval Augmented Generation for Medical Question Answering [45.84961106102445]
Large Language Models (LLMs) often perform poorly on domain-specific tasks such as medical question answering (QA).
We propose a comprehensive retrieval strategy to extract medical facts from an external knowledge base, and then inject them into the LLM's query prompt (a minimal sketch of this general pattern appears after this list).
Our retrieval-augmented Vicuna-7B model exhibited an accuracy improvement from 44.46% to 48.54%.
arXiv Detail & Related papers (2023-09-27T21:26:03Z)
- Augmenting Black-box LLMs with Medical Textbooks for Biomedical Question Answering (Published in Findings of EMNLP 2024) [48.17095875619711]
We present a system called LLMs Augmented with Medical Textbooks (LLM-AMT).
LLM-AMT integrates authoritative medical textbooks into the LLMs' framework using plug-and-play modules.
We found that medical textbooks are a more effective retrieval corpus than Wikipedia in the medical domain.
arXiv Detail & Related papers (2023-09-05T13:39:38Z)
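
Two of the entries above (MKRAG and LLM-AMT) describe the same retrieval-augmented pattern: fetch relevant facts from an external knowledge base and inject them into the query prompt. The sketch below shows that general pattern only; the word-overlap retriever and the prompt template are simplified, hypothetical stand-ins rather than either paper's method.

```python
# Toy illustration of retrieval-augmented prompting; the word-overlap
# retriever and the prompt template are hypothetical simplifications.
def retrieve_facts(question, fact_bank, top_k=3):
    """Rank stored facts by word overlap with the question. A real
    system would use a dense retriever or a search index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        fact_bank,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def inject_into_prompt(question, facts):
    """Prepend the retrieved facts to the LLM's query prompt."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Use the following medical facts if they are relevant:\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```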
This list is automatically generated from the titles and abstracts of the papers on this site.