Path to Medical AGI: Unify Domain-specific Medical LLMs with the Lowest Cost
- URL: http://arxiv.org/abs/2306.10765v1
- Date: Mon, 19 Jun 2023 08:15:14 GMT
- Title: Path to Medical AGI: Unify Domain-specific Medical LLMs with the Lowest Cost
- Authors: Juexiao Zhou, Xiuying Chen, Xin Gao
- Abstract summary: Medical artificial general intelligence (AGI) aims to develop systems that can understand, learn, and apply knowledge across a wide range of tasks and domains.
Large language models (LLMs) represent a significant step towards AGI.
We propose Medical AGI (MedAGI), a paradigm to unify domain-specific medical LLMs with the lowest cost.
- Score: 18.4295882376915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Medical artificial general intelligence (AGI) is an emerging field that aims
to develop systems specifically designed for medical applications that possess
the ability to understand, learn, and apply knowledge across a wide range of
tasks and domains. Large language models (LLMs) represent a significant step
towards AGI. However, training cross-domain LLMs in the medical field poses
significant challenges primarily attributed to the requirement of collecting
data from diverse domains. This task becomes particularly difficult due to
privacy restrictions and the scarcity of publicly available medical datasets.
Here, we propose Medical AGI (MedAGI), a paradigm to unify domain-specific
medical LLMs with the lowest cost, and suggest a possible path to achieve
medical AGI. With an increasing number of domain-specific professional
multimodal LLMs in the medical field being developed, MedAGI is designed to
automatically select appropriate medical models by analyzing users' questions
with our novel adaptive expert selection algorithm. It offers a unified
approach to existing LLMs in the medical field, eliminating the need for
retraining regardless of the introduction of new models. This characteristic
renders it a future-proof solution in the dynamically advancing medical domain.
To showcase the resilience of MedAGI, we conducted an evaluation across three
distinct medical domains: dermatology diagnosis, X-ray diagnosis, and analysis
of pathology pictures. The results demonstrated that MedAGI exhibited
remarkable versatility and scalability, delivering exceptional performance
across diverse domains. Our code is publicly available to facilitate further
research at https://github.com/JoshuaChou2018/MedAGI.
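
The core mechanism described in the abstract is routing each user question to the most relevant domain-specific medical LLM without any retraining. Below is a minimal sketch of such question-based expert selection; it assumes an embedding-similarity router over hand-written domain descriptions, and the model names, descriptions, and sentence-transformers encoder are illustrative placeholders rather than the paper's actual adaptive expert selection algorithm.

```python
# Minimal sketch of question-based expert selection (illustrative, not the
# authors' implementation). A user's question is embedded and compared with
# short descriptions of each domain-specific model; the best match is chosen.
from sentence_transformers import SentenceTransformer, util

# Hypothetical registry of domain-specific medical LLMs with brief domain
# descriptions; adding a new model is a registry entry, not a retraining run.
EXPERTS = {
    "dermatology-llm": "dermatology: diagnosis of skin conditions from clinical photos",
    "xray-llm": "radiology: interpretation of chest X-ray images",
    "pathology-llm": "pathology: analysis of histopathology slide images",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose text encoder
expert_names = list(EXPERTS)
expert_embeddings = encoder.encode(list(EXPERTS.values()), convert_to_tensor=True)

def select_expert(question: str) -> str:
    """Return the domain-specific model whose description best matches the question."""
    question_embedding = encoder.encode(question, convert_to_tensor=True)
    similarities = util.cos_sim(question_embedding, expert_embeddings)[0]
    return expert_names[int(similarities.argmax())]

if __name__ == "__main__":
    print(select_expert("Is this mole on my forearm likely to be malignant?"))
    # -> dermatology-llm
```

In a setup like this, integrating a newly released domain-specific model only requires adding one registry entry, which mirrors the abstract's claim that no retraining is needed when new models are introduced.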
Related papers
- Parameter-Efficient Fine-Tuning Medical Multimodal Large Language Models for Medical Visual Grounding [9.144030136201476]
Multimodal large language models (MLLMs) inherit the superior text understanding capabilities of LLMs and extend these capabilities to multimodal scenarios.
These models achieve excellent results on general-domain multimodal tasks.
However, in the medical domain, the substantial training costs and the requirement for extensive medical data pose challenges to the development of medical MLLMs.
arXiv Detail & Related papers (2024-10-31T11:07:26Z)
- MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models [49.765466293296186]
Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for interactive diagnostic tools.
Med-LVLMs often suffer from factual hallucination, which can lead to incorrect diagnoses.
We propose a versatile multimodal RAG system, MMed-RAG, designed to enhance the factuality of Med-LVLMs.
arXiv Detail & Related papers (2024-10-16T23:03:27Z)
- COGNET-MD, an evaluation framework and dataset for Large Language Model benchmarks in the medical domain [1.6752458252726457]
Large Language Models (LLMs) constitute a breakthrough state-of-the-art Artificial Intelligence (AI) technology.
We outline the Cognitive Network Evaluation Toolkit for Medical Domains (COGNET-MD).
We propose a scoring framework with increased difficulty to assess the ability of LLMs to interpret medical text.
arXiv Detail & Related papers (2024-05-17T16:31:56Z)
- Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models [17.643421997037514]
We propose a novel framework that tackles both discriminative and generative multimodal medical tasks.
The learning of Med-MoE consists of three steps: multimodal medical alignment, instruction tuning and routing, and domain-specific MoE tuning.
Our model can achieve performance superior to or on par with state-of-the-art baselines.
arXiv Detail & Related papers (2024-04-16T02:35:17Z)
- OmniMedVQA: A New Large-Scale Comprehensive Evaluation Benchmark for Medical LVLM [48.16696073640864]
We introduce OmniMedVQA, a novel comprehensive medical Visual Question Answering (VQA) benchmark.
All images in this benchmark are sourced from authentic medical scenarios.
We have found that existing LVLMs struggle to address these medical VQA problems effectively.
arXiv Detail & Related papers (2024-02-14T13:51:56Z)
- ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences [51.66185471742271]
We propose ChiMed-GPT, a benchmark LLM designed explicitly for the Chinese medical domain.
ChiMed-GPT undergoes a comprehensive training regime with pre-training, SFT, and RLHF.
We analyze possible biases by prompting ChiMed-GPT to complete attitude scales regarding discrimination against patients.
arXiv Detail & Related papers (2023-11-10T12:25:32Z)
- Artificial General Intelligence for Medical Imaging Analysis [92.3940918983821]
Large-scale Artificial General Intelligence (AGI) models have achieved unprecedented success in a variety of general domain tasks.
These models face notable challenges arising from the medical field's inherent complexities and unique characteristics.
This review aims to offer insights into the future implications of AGI in medical imaging, healthcare, and beyond.
arXiv Detail & Related papers (2023-06-08T18:04:13Z)
- Towards Medical Artificial General Intelligence via Knowledge-Enhanced Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knOwledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z)
- Universal Model for Multi-Domain Medical Image Retrieval [88.67940265012638]
Medical Image Retrieval (MIR) helps doctors quickly find similar patients' data.
MIR is becoming increasingly helpful due to the wide use of digital imaging modalities.
However, the popularity of various digital imaging modalities in hospitals also poses several challenges to MIR.
arXiv Detail & Related papers (2020-07-14T23:22:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.