TCM-GPT: Efficient Pre-training of Large Language Models for Domain
Adaptation in Traditional Chinese Medicine
- URL: http://arxiv.org/abs/2311.01786v1
- Date: Fri, 3 Nov 2023 08:54:50 GMT
- Title: TCM-GPT: Efficient Pre-training of Large Language Models for Domain
Adaptation in Traditional Chinese Medicine
- Authors: Guoxing Yang, Jianyu Shi, Zan Wang, Xiaohong Liu, Guangyu Wang
- Abstract summary: We propose TCMDA (TCM Domain Adaptation), a novel approach based on efficient pre-training with a domain-specific corpus.
Specifically, we first construct a large TCM-specific corpus, TCM-Corpus-1B, by identifying domain keywords and retrieving relevant documents from a general corpus.
Then, TCMDA leverages LoRA, which freezes the pretrained model's weights and uses rank decomposition matrices to efficiently train specific dense layers for pre-training and fine-tuning.
- Score: 11.537289359051975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-training and fine-tuning have emerged as a promising paradigm
across various natural language processing (NLP) tasks. Pretrained large
language models (LLMs) have become increasingly effective, holding potential
for applications in medicine, particularly in the context of Traditional
Chinese Medicine (TCM). However, applying these general models to specific
domains often yields suboptimal results, primarily due to challenges such as a
lack of domain knowledge, unique objectives, and computational efficiency.
Furthermore, their effectiveness in specialized domains, such as Traditional
Chinese Medicine, requires comprehensive evaluation. To address these issues,
we propose TCMDA (TCM Domain Adaptation), a novel domain-specific approach
based on efficient pre-training with a domain-specific corpus. Specifically,
we first construct a large TCM-specific corpus, TCM-Corpus-1B, by identifying
domain keywords and retrieving relevant documents from a general corpus. Then,
TCMDA leverages LoRA, which freezes the pretrained model's weights and uses
rank decomposition matrices to efficiently train specific dense layers for
pre-training and fine-tuning, aligning the model with TCM-related tasks; we
name the resulting model TCM-GPT-7B. We further conducted extensive
experiments on two TCM tasks, TCM examination and TCM diagnosis. TCM-GPT-7B
achieved the best performance on both datasets, outperforming other models by
relative accuracy improvements of 17% and 12%, respectively. To the best of
our knowledge, our study represents the first validation of domain adaptation
for a large language model with 7 billion parameters in the TCM domain. We
will release both TCM-Corpus-1B and TCM-GPT-7B upon acceptance to facilitate
interdisciplinary development in TCM and NLP, serving as a foundation for
further study.
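The corpus construction step described above (identifying domain keywords and retrieving matching documents from a general corpus) amounts to keyword-based filtering. The sketch below shows one minimal way such filtering could work; the seed keyword set, JSONL format, file paths, and match threshold are illustrative assumptions, not the authors' actual TCM-Corpus-1B pipeline.

```python
# Minimal sketch of keyword-based corpus retrieval: keep a document from a
# general corpus when it mentions enough TCM domain keywords.
# The keyword list, file paths, and threshold are illustrative assumptions.
import json

TCM_KEYWORDS = {"中医", "针灸", "方剂", "辨证", "本草", "经络"}  # hypothetical seed set

def is_tcm_document(text: str, min_hits: int = 2) -> bool:
    """Keep a document if it contains at least `min_hits` distinct keywords."""
    return sum(1 for kw in TCM_KEYWORDS if kw in text) >= min_hits

def filter_corpus(in_path: str, out_path: str) -> None:
    """Stream a JSONL general corpus and write only TCM-related documents."""
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            doc = json.loads(line)
            if is_tcm_document(doc.get("text", "")):
                dst.write(line)

if __name__ == "__main__":
    filter_corpus("general_corpus.jsonl", "tcm_corpus.jsonl")
```

In practice a keyword-matched subset would typically still be deduplicated and cleaned before pre-training; those steps are omitted here for brevity.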
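The abstract's LoRA description (freeze the pretrained weights, train rank decomposition matrices for selected dense layers) can be made concrete with a small PyTorch sketch. The rank, scaling factor, and layer sizes below are illustrative and do not reflect the actual TCM-GPT-7B configuration.

```python
# Minimal sketch of a LoRA-wrapped linear layer: the pretrained weight is
# frozen and only the low-rank update B @ A is trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Rank decomposition matrices: A (rank x in), B (out x rank).
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus trainable low-rank update x A^T B^T.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage: wrap selected dense layers of a pretrained model, then optimize only
# the LoRA parameters during continued pre-training or fine-tuning.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
out = layer(torch.randn(2, 4096))
```

Because lora_B starts at zero, the wrapped layer initially reproduces the frozen base layer exactly, which is the standard LoRA initialization.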
Related papers
- Enhancing the Traditional Chinese Medicine Capabilities of Large Language Model through Reinforcement Learning from AI Feedback [5.855520522078306]
We propose a framework to improve the performance of large language models for Traditional Chinese Medicine (TCM) tasks using only a small amount of data.
We use medical case data for supervised fine-tuning of the large model, making it initially capable of performing TCM tasks.
We further optimize the model's performance using reinforcement learning from AI feedback (RLAIF) to align it with the preference data.
arXiv Detail & Related papers (2024-11-01T04:19:55Z)
- Intelligent Understanding of Large Language Models in Traditional Chinese Medicine Based on Prompt Engineering Framework [3.990633038739491]
We propose TCM-Prompt, a framework that integrates various pre-trained language models (PLMs), templates, tokenization, and verbalization methods.
We conducted experiments on disease classification, syndrome identification, herbal medicine recommendation, and general NLP tasks.
arXiv Detail & Related papers (2024-10-25T10:24:30Z)
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- TCM-FTP: Fine-Tuning Large Language Models for Herbal Prescription Prediction [17.041413449854915]
Traditional Chinese medicine relies on specific combinations of herbs in prescriptions to treat symptoms and signs, a practice that spans thousands of years.
We introduce DigestDS, a new dataset containing practical medical records from experienced experts in digestive system diseases.
We also propose a method, TCM-FTP (TCM Fine-Tuning Pre-trained), to leverage pre-trained large language models (LLMs) through supervised fine-tuning on DigestDS.
arXiv Detail & Related papers (2024-07-15T08:06:37Z)
- Qibo: A Large Language Model for Traditional Chinese Medicine [10.394665777883064]
Traditional Chinese medicine poses challenges such as the fundamental differences between its theory and that of modern medicine.
We propose a two-stage training approach that combines continuous pre-training and supervised fine-tuning.
A notable contribution of our study is the processing of a 2GB corpus dedicated to TCM.
arXiv Detail & Related papers (2024-03-24T07:48:05Z)
- Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
Inference with LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z)
- HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs [61.41790586411816]
HuatuoGPT-II has shown state-of-the-art performance in Chinese medicine domain on a number of benchmarks.
It even outperforms proprietary models like ChatGPT and GPT-4 in some aspects, especially in Traditional Chinese Medicine.
arXiv Detail & Related papers (2023-11-16T10:56:24Z)
- PMC-LLaMA: Towards Building Open-source Language Models for Medicine [62.39105735933138]
Large Language Models (LLMs) have showcased remarkable capabilities in natural language understanding.
LLMs struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge.
We describe the procedure for building a powerful, open-source language model specifically designed for medical applications, termed PMC-LLaMA.
arXiv Detail & Related papers (2023-04-27T18:29:05Z)
- TCM-SD: A Benchmark for Probing Syndrome Differentiation via Natural Language Processing [31.190757020836656]
We focus on the core task of the TCM diagnosis and treatment system: syndrome differentiation (SD).
Our dataset contains 54,152 real-world clinical records covering 148 syndromes.
We propose a domain-specific pre-trained language model, called ZY-BERT.
arXiv Detail & Related papers (2022-03-21T09:59:54Z)
- Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains in real-world medical deployment, we consider the unique task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z)
- Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing [73.37262264915739]
We show that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains.
Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks.
arXiv Detail & Related papers (2020-07-31T00:04:15Z)