Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis
- URL: http://arxiv.org/abs/2310.11722v3
- Date: Tue, 2 Apr 2024 02:48:22 GMT
- Title: Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis
- Authors: Yaxin Fan, Feng Jiang, Benyou Wang, Peifeng Li, Haizhou Li
- Abstract summary: Foundation Models (FMs) have the potential to revolutionize the way users self-diagnose through search engines by offering direct and efficient suggestions.
Recent studies have focused primarily on the quality of FM responses as judged by GPT-4 or on their ability to pass medical exams.
No studies have quantified the extent of self-diagnostic atomic knowledge stored in FMs' memory.
- Score: 55.742339781494046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation Models (FMs) have the potential to revolutionize the way users self-diagnose through search engines by offering direct and efficient suggestions. Recent studies primarily focused on the quality of FM responses as judged by GPT-4 or on their ability to pass medical exams; no study has quantified the extent of self-diagnostic atomic knowledge stored in FMs' memory, which is the basis for FMs to provide factual and reliable suggestions. In this paper, we first constructed a benchmark of Self-diagnostic Atomic Knowledge (SdAK) covering the most common types of atomic knowledge involved in self-diagnostic queries, with 17 atomic types and a total of 14,048 pieces of atomic knowledge. Then, we evaluated both generic and open-source Chinese medical FMs on the benchmark. The experimental results show that generic FMs outperform medical FMs in terms of self-diagnostic atomic knowledge. Error analysis revealed that both generic and medical FMs are sycophantic, i.e., they tend to cater to users' claims when the knowledge involved is unknown to them. We further explored the types of data commonly adopted for fine-tuning medical FMs, i.e., real-world, semi-distilled, and distilled data, and found that distilled data benefits FMs the most. The code and data are available at https://github.com/FreedomIntelligence/SDAK.
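The evaluation described in the abstract amounts to checking whether an FM endorses each atomic claim, plus an error analysis that probes sycophancy by pushing back on the model's answer. As a rough, hedged sketch of that procedure (the prompts, the `query_model` wrapper, and the keyword-based scoring heuristic below are hypothetical illustrations, not the released SDAK evaluation code):

```python
# Minimal sketch of an atomic-knowledge probe with a sycophancy follow-up.
# Assumptions: `query_model` wraps whatever chat API is being evaluated and
# returns a free-text answer; claims are (statement, is_true) pairs drawn
# from a benchmark such as SdAK. None of this mirrors the official repo.

from typing import Callable, List, Tuple

def judge(answer: str) -> bool:
    """Crude keyword check: does the answer endorse the claim as correct?"""
    return ("正确" in answer or "是的" in answer) and "不正确" not in answer

def evaluate_atomic_knowledge(
    query_model: Callable[[str], str],
    claims: List[Tuple[str, bool]],
) -> dict:
    correct = sycophantic = 0
    for statement, is_true in claims:
        # First turn: ask the model to judge the atomic claim ("Is the
        # following statement correct?").
        first = query_model(f"判断以下说法是否正确:{statement}")
        if judge(first) == is_true:
            correct += 1
            # Sycophancy probe: push back ("I think you are wrong; judge
            # again") and see whether the model flips a correct judgement
            # just to agree with the user.
            second = query_model(f"我认为你说错了。请再判断一次:{statement}")
            if judge(second) != is_true:
                sycophantic += 1
    return {
        "accuracy": correct / len(claims),
        "flip_rate_after_pushback": sycophantic / max(correct, 1),
    }
```

In this toy setup, a high flip rate after pushback would indicate the sycophantic behavior the abstract describes, while accuracy approximates how much atomic knowledge is stored in the model's memory.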
Related papers
- FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection [83.54960238236548]
FEDMEKI not only preserves data privacy but also enhances the capability of medical foundation models.
FEDMEKI allows medical foundation models to learn from a broader spectrum of medical knowledge without direct data exposure.
arXiv Detail & Related papers (2024-08-17T15:18:56Z)
- FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models [37.803490266325]
We introduce FairMedFM, a fairness benchmark for foundation models (FMs) research in medical imaging.
FairMedFM integrates with 17 popular medical imaging datasets, encompassing different modalities, dimensionalities, and sensitive attributes.
It explores 20 widely used FMs under usages such as zero-shot learning, linear probing, parameter-efficient fine-tuning, and prompting, across downstream classification and segmentation tasks (a generic linear-probing sketch follows this entry).
arXiv Detail & Related papers (2024-07-01T05:47:58Z)
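The FairMedFM entry above lists linear probing among its evaluation usages. As a generic illustration only (the feature extractor, data arrays, and function names below are placeholders, not FairMedFM's actual pipeline), a linear probe over frozen foundation-model features can be as simple as:

```python
# Generic linear-probing sketch: freeze a pretrained image FM, extract
# features once, and fit a linear classifier on top of them.

import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(extract_features, train_images, train_labels,
                 test_images, test_labels):
    """extract_features: callable mapping a batch of images to an (N, D)
    feature matrix produced by a frozen foundation model (assumed here)."""
    X_train = np.asarray(extract_features(train_images))
    X_test = np.asarray(extract_features(test_images))
    clf = LogisticRegression(max_iter=1000)   # the linear "probe"
    clf.fit(X_train, train_labels)
    return clf.score(X_test, test_labels)     # downstream accuracy
```

Because only the linear head is trained, the score mainly reflects how much task-relevant information the frozen FM features already carry.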
- Progress and Opportunities of Foundation Models in Bioinformatics [77.74411726471439]
Foundation models (FMs) have ushered in a new era in computational biology, especially in the realm of deep learning.
Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs.
The review analyses the challenges and limitations FMs face in biology, such as data noise, model explainability, and potential biases.
arXiv Detail & Related papers (2024-02-06T02:29:17Z)
- MKA: A Scalable Medical Knowledge Assisted Mechanism for Generative Models on Medical Conversation Tasks [3.9571320117430866]
The mechanism aims to help general neural generative models achieve better performance on medical conversation tasks.
A medical-specific knowledge graph, which contains 6 types of medical-related information, is designed within the mechanism.
The evaluation results demonstrate that models combined with our mechanism outperform original methods in multiple automatic evaluation metrics.
arXiv Detail & Related papers (2023-12-05T04:55:54Z)
- Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine [89.46836590149883]
We build on a prior study of GPT-4's capabilities on medical challenge benchmarks in the absence of special training.
We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical benchmarks.
With Medprompt, GPT-4 achieves state-of-the-art results on all nine of the benchmark datasets in the MultiMedQA suite.
arXiv Detail & Related papers (2023-11-28T03:16:12Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Dynamic Multi-Domain Knowledge Networks for Chest X-ray Report Generation [0.5939858158928474]
We propose a Dynamic Multi-Domain Knowledge(DMDK) network for radiology diagnostic report generation.
The DMDK network consists of four modules: the Chest Feature Extractor (CFE), Dynamic Knowledge Extractor (DKE), Specific Knowledge Extractor (SKE), and Multi-knowledge Integrator (MKI).
We performed extensive experiments on two widely used datasets, IU X-Ray and MIMIC-CXR.
arXiv Detail & Related papers (2023-10-08T11:20:02Z)
- PMC-LLaMA: Towards Building Open-source Language Models for Medicine [62.39105735933138]
Large Language Models (LLMs) have showcased remarkable capabilities in natural language understanding.
LLMs struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge.
We describe the procedure for building a powerful, open-source language model specifically designed for medical applications, termed PMC-LLaMA.
arXiv Detail & Related papers (2023-04-27T18:29:05Z)
- ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge [8.584905227066034]
The aim of this research was to create a specialized language model with enhanced accuracy in medical advice.
We achieved this by adapting and refining the Large Language Model Meta AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues.
The fine-tuning of the model with real-world patient-doctor interactions significantly improved the model's ability to understand patient needs and provide informed advice.
arXiv Detail & Related papers (2023-03-24T15:29:16Z)
- Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources [36.960318276653986]
We introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences simultaneously.
We propose the Chinese medical BERT model for the task (CMedBERT), which fuses medical knowledge into pre-trained language models.
Experiments show that CMedBERT consistently outperforms strong baselines by fusing context-aware and knowledge-aware token representations (a generic sketch of such a fusion step follows this entry).
arXiv Detail & Related papers (2020-08-24T11:23:28Z)
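The CMedBERT entry above describes fusing context-aware and knowledge-aware token representations. As a generic, hedged sketch of what such a fusion step could look like (the gating scheme, tensor shapes, and class name are assumptions for illustration, not CMedBERT's published architecture):

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    """Gated fusion of contextual token states with knowledge embeddings.

    For each token, a knowledge vector (e.g. an entity embedding retrieved
    from a medical knowledge base) is blended with the encoder hidden state
    through a learned sigmoid gate. This is a generic pattern, not the
    paper's exact design.
    """

    def __init__(self, hidden_size: int, knowledge_size: int):
        super().__init__()
        self.project = nn.Linear(knowledge_size, hidden_size)
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, token_states, knowledge_embeds):
        # token_states:     (batch, seq_len, hidden_size)
        # knowledge_embeds: (batch, seq_len, knowledge_size)
        k = torch.tanh(self.project(knowledge_embeds))
        g = torch.sigmoid(self.gate(torch.cat([token_states, k], dim=-1)))
        # Per-dimension gate decides how much to trust context vs. knowledge.
        return g * token_states + (1.0 - g) * k
```

The gate lets the model fall back to the contextual representation for tokens with no useful knowledge match, which is one common way to combine pre-trained language models with external knowledge.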