On the Performance of an Explainable Language Model on PubMedQA
- URL: http://arxiv.org/abs/2504.05074v1
- Date: Mon, 07 Apr 2025 13:42:02 GMT
- Title: On the Performance of an Explainable Language Model on PubMedQA
- Authors: Venkat Srinivasan, Vishaal Jatav, Anushka Chandrababu, Geetika Sharma
- Abstract summary: We report results from Gyan, an explainable language model based on an alternative architecture, on the PubMedQA data set. Gyan is trustworthy, transparent, does not hallucinate, and does not require significant training or compute resources.
- Score: 1.1484381570538684
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have shown significant abilities in retrieving medical knowledge, reasoning over it, and answering medical questions comparably to physicians. However, these models are not interpretable, hallucinate, are difficult to maintain, and require enormous compute resources for training and inference. In this paper, we report results from Gyan, an explainable language model based on an alternative architecture, on the PubMedQA data set. The Gyan LLM is a compositional language model in which the model is decoupled from knowledge. Gyan is trustworthy, transparent, does not hallucinate, and does not require significant training or compute resources. Gyan is easily transferable across domains. Gyan-4.3 achieves SOTA results on PubMedQA with 87.1% accuracy, compared to 82% by MedPrompt (based on GPT-4) and 81.8% by Med-PaLM 2 (Google and DeepMind). We will report results for other medical data sets - MedQA, MedMCQA, MMLU-Medicine - in the future.
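For context, the sketch below shows how PubMedQA accuracy figures like those above are typically computed, assuming the public Hugging Face release of the expert-labeled split (dataset id `pubmed_qa`, config `pqa_labeled`, 1,000 yes/no/maybe questions); the `predict` function is a hypothetical stand-in for whichever model is being scored (Gyan, MedPrompt, Med-PaLM 2, ...), not part of any of these systems.

```python
# Minimal PubMedQA evaluation sketch (assumptions: Hugging Face dataset id
# "pubmed_qa" with config "pqa_labeled"; `predict` is a hypothetical model hook).
from datasets import load_dataset

def predict(question: str, context: str) -> str:
    """Hypothetical model call: must return 'yes', 'no', or 'maybe'."""
    raise NotImplementedError

# The expert-labeled split ships as a single 1,000-example "train" split.
dataset = load_dataset("pubmed_qa", "pqa_labeled", split="train")

correct = 0
for example in dataset:
    # Each example carries the source abstract as a list of context passages.
    context = " ".join(example["context"]["contexts"])
    if predict(example["question"], context) == example["final_decision"]:
        correct += 1

print(f"Accuracy: {correct / len(dataset):.1%}")
```

The reported numbers (87.1%, 82%, 81.8%) correspond to this accuracy over the yes/no/maybe labels.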
Related papers
- MEG: Medical Knowledge-Augmented Large Language Models for Question Answering [37.3562521243773]
We present MEG, a parameter-efficient approach for medical knowledge-augmented LLMs.
We evaluate our method on four popular medical multiple-choice datasets.
arXiv Detail & Related papers (2024-11-06T12:57:58Z)
- MedGo: A Chinese Medical Large Language Model [20.770607085079195]
This paper presents a Chinese medical large language model, MedGo.
MedGo was trained using a combination of high-quality unsupervised medical data, supervised data, and preference alignment data.
The results demonstrate that MedGo achieved promising performance across various Chinese medical information processing tasks.
arXiv Detail & Related papers (2024-10-27T12:52:52Z)
- Efficient Medical Question Answering with Knowledge-Augmented Question Generation [5.145812785735094]
We introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.
We first fine-tune the model on a corpus of medical textbooks.
Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.
arXiv Detail & Related papers (2024-05-23T14:53:52Z)
- Capabilities of Gemini Models in Medicine [100.60391771032887]
We introduce Med-Gemini, a family of highly capable multimodal models specialized in medicine.
We evaluate Med-Gemini on 14 medical benchmarks, establishing new state-of-the-art (SoTA) performance on 10 of them.
Our results offer compelling evidence for Med-Gemini's potential, although further rigorous evaluation will be crucial before real-world deployment.
arXiv Detail & Related papers (2024-04-29T04:11:28Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text [82.7001841679981]
BioMedLM is a 2.7 billion parameter GPT-style autoregressive model trained exclusively on PubMed abstracts and full articles.
When fine-tuned, BioMedLM can produce strong multiple-choice biomedical question-answering results competitive with larger models.
BioMedLM can also be fine-tuned to produce useful answers to patient questions on medical topics.
arXiv Detail & Related papers (2024-03-27T10:18:21Z)
- MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records [60.35217378132709]
Large language models (LLMs) can follow natural language instructions with human-level fluency.
However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging.
We introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data.
arXiv Detail & Related papers (2023-08-27T12:24:39Z)
- Customizing General-Purpose Foundation Models for Medical Report Generation [64.31265734687182]
The scarcity of labelled medical image-report pairs presents great challenges in the development of deep and large-scale neural networks.
We propose customizing off-the-shelf general-purpose large-scale pre-trained models, i.e., foundation models (FMs), from computer vision and natural language processing.
arXiv Detail & Related papers (2023-06-09T03:02:36Z)
- HuatuoGPT, towards Taming Language Model to Be a Doctor [67.96794664218318]
HuatuoGPT is a large language model (LLM) for medical consultation.
We leverage both distilled data from ChatGPT and real-world data from doctors in the supervised fine-tuning stage.
arXiv Detail & Related papers (2023-05-24T11:56:01Z)
- Can large language models reason about medical questions? [7.95779617839642]
We investigate whether closed- and open-source models can be applied to answer and reason about difficult real-world-based questions.
We focus on three popular medical benchmarks (MedQA-USMLE, MedMCQA, and PubMedQA) and multiple prompting scenarios (a minimal prompt sketch follows this entry).
Based on an expert annotation of the generated chain-of-thought (CoT) rationales, we found that InstructGPT can often read, reason, and recall expert knowledge.
arXiv Detail & Related papers (2022-07-17T11:24:44Z)
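To make the prompting scenarios above concrete, here is a minimal zero-shot chain-of-thought template for a PubMedQA-style question; the template wording is an illustrative assumption in the spirit of that paper's setup, not the authors' exact prompt, and the example inputs are hypothetical.

```python
# Illustrative zero-shot chain-of-thought (CoT) prompt for a PubMedQA-style
# question. The template text and example inputs are assumptions for
# illustration, not the prompts used in the paper.
COT_TEMPLATE = """Context: {context}

Question: {question}

Answer with yes, no, or maybe.
Let's think step by step."""

def build_cot_prompt(question: str, context: str) -> str:
    """Fill the template; the model's free-text reasoning is then parsed
    for a final yes/no/maybe decision."""
    return COT_TEMPLATE.format(context=context, question=question)

print(build_cot_prompt(
    question="Does the intervention reduce 30-day mortality?",  # hypothetical
    context="(abstract of the source PubMed article goes here)",
))
```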