RoKEPG: RoBERTa and Knowledge Enhancement for Prescription Generation of
Traditional Chinese Medicine
- URL: http://arxiv.org/abs/2311.17307v1
- Date: Wed, 29 Nov 2023 01:59:38 GMT
- Authors: Hua Pu, Jiacong Mi, Shan Lu, Jieyue He
- Abstract summary: We propose a RoBERTa and Knowledge Enhancement model for Prescription Generation of Traditional Chinese Medicine (RoKEPG).
RoKEPG is guided to generate TCM prescriptions by introducing four classes of knowledge of TCM through the attention mask matrix.
Experimental results on the publicly available TCM prescription dataset show that RoKEPG improves the F1 metric by about 2% over the baseline model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional Chinese medicine (TCM) prescription is the most critical form of
TCM treatment, and uncovering the complex nonlinear relationship between
symptoms and TCM is of great significance for clinical practice and assisting
physicians in diagnosis and treatment. Although there have been some studies on
TCM prescription generation, these studies consider only a single factor,
modeling the symptom-to-prescription generation problem directly from symptom
descriptions and lacking guidance from TCM knowledge. To this end, we
propose a RoBERTa and Knowledge Enhancement model for Prescription Generation
of Traditional Chinese Medicine (RoKEPG). RoKEPG is first pre-trained on our
constructed TCM corpus and then fine-tuned; during fine-tuning, the model is
guided to generate TCM prescriptions by introducing four classes of TCM
knowledge through the attention mask matrix. Experimental results on the
publicly available TCM prescription dataset show that RoKEPG improves the F1
metric by about 2% over the best-performing baseline model.
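The knowledge-guided generation described in the abstract can be illustrated with a minimal sketch. This is a hedged example, not the authors' implementation: it shows, under assumed shapes and a hypothetical token layout, how an attention mask matrix can control which positions may consult appended knowledge tokens.

```python
import numpy as np

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with an additive mask.

    mask[i, j] = 0 lets query position i attend to key position j;
    mask[i, j] = -inf blocks that connection entirely.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + mask
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8  # hypothetical: 4 symptom tokens + 2 knowledge tokens
q = rng.normal(size=(seq_len, d_model))
k = rng.normal(size=(seq_len, d_model))
v = rng.normal(size=(seq_len, d_model))

# Hypothetical mask: all positions may attend to the symptom tokens (0-3),
# but only positions 2-5 may consult the appended knowledge tokens (4-5).
mask = np.zeros((seq_len, seq_len))
mask[:2, 4:] = -np.inf

out = masked_attention(q, k, v, mask)
print(out.shape)  # (6, 8)
```

In this sketch the mask is a hand-written toy; in a knowledge-enhanced model such a matrix would be derived from which knowledge entries are relevant to which input tokens.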
Related papers
- Natural Language-Assisted Multi-modal Medication Recommendation [97.07805345563348]
We introduce the Natural Language-Assisted Multi-modal Medication Recommendation (NLA-MMR).
The NLA-MMR is a multi-modal alignment framework designed to learn knowledge jointly from the patient view and the medication view.
In this vein, we employ pretrained language models (PLMs) to extract in-domain knowledge regarding patients and medications.
arXiv Detail & Related papers (2025-01-13T09:51:50Z) - Hengqin-RA-v1: Advanced Large Language Model for Diagnosis and Treatment of Rheumatoid Arthritis with Dataset based Traditional Chinese Medicine [9.423846262482716]
This paper introduces Hengqin-RA-v1, the first large language model specifically tailored for Traditional Chinese Medicine (TCM).
We also present HQ-GCM-RA-C1, a comprehensive RA-specific dataset curated from ancient Chinese medical literature, classical texts, and modern clinical studies.
arXiv Detail & Related papers (2025-01-05T07:46:51Z) - BianCang: A Traditional Chinese Medicine Large Language Model [22.582027277167047]
BianCang is a TCM-specific large language model (LLM) that first injects domain-specific knowledge and then aligns it through targeted stimulation.
We constructed pre-training corpora, instruction-aligned datasets based on real hospital records, and the ChP-TCM dataset derived from the Pharmacopoeia of the People's Republic of China.
We compiled extensive TCM and medical corpora for continuous pre-training and supervised fine-tuning, building a comprehensive dataset to refine the model's understanding of TCM.
arXiv Detail & Related papers (2024-11-17T10:17:01Z) - TCM-FTP: Fine-Tuning Large Language Models for Herbal Prescription Prediction [17.041413449854915]
Traditional Chinese medicine (TCM) has relied on specific combinations of herbs in prescriptions to treat various symptoms and signs for thousands of years.
Predicting TCM prescriptions poses a fascinating technical challenge with significant practical implications.
We introduce DigestDS, a novel dataset comprising practical medical records from experienced experts in digestive system diseases.
We also propose a method, TCM-FTP (TCM Fine-Tuning Pre-trained), to leverage pre-trained large language models (LLMs) via supervised fine-tuning on DigestDS.
arXiv Detail & Related papers (2024-07-15T08:06:37Z) - Leave No Patient Behind: Enhancing Medication Recommendation for Rare Disease Patients [47.68396964741116]
We propose a novel model called Robust and Accurate REcommendations for Medication (RAREMed) to enhance accuracy for rare diseases.
It employs a transformer encoder with a unified input sequence approach to capture complex relationships among disease and procedure codes.
It provides accurate drug sets for both rare and common disease patients, thereby mitigating unfairness in medication recommendation systems.
arXiv Detail & Related papers (2024-03-26T14:36:22Z) - ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences [51.66185471742271]
We propose ChiMed-GPT, a benchmark LLM designed explicitly for the Chinese medical domain.
ChiMed-GPT undergoes a comprehensive training regime with pre-training, SFT, and RLHF.
We analyze possible biases by prompting ChiMed-GPT to complete attitude scales regarding discrimination against patients.
arXiv Detail & Related papers (2023-11-10T12:25:32Z) - TCM-GPT: Efficient Pre-training of Large Language Models for Domain
Adaptation in Traditional Chinese Medicine [11.537289359051975]
We propose a novel TCMDA (TCM Domain Adaptation) approach, efficient pre-training with domain-specific corpus.
Specifically, we first construct a large TCM-specific corpus, TCM-Corpus-1B, by identifying domain keywords and retrieving from a general corpus.
Then, our TCMDA leverages LoRA, which freezes the pretrained model's weights and uses rank-decomposition matrices to efficiently train specific dense layers for pre-training and fine-tuning.
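The rank-decomposition idea behind LoRA can be sketched as follows. This is a toy illustration with hypothetical layer sizes, not the TCM-GPT configuration: the pretrained weight W stays frozen while two small matrices A and B supply a trainable low-rank update.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # hypothetical layer width and LoRA rank

W = rng.normal(size=(d_out, d_in))     # pretrained weight, frozen
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def lora_forward(x):
    """y = (W + B @ A) x: base output plus a rank-r update."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))

# With B initialized to zero, the LoRA branch adds nothing,
# so the adapted layer starts out identical to the frozen one.
assert np.allclose(lora_forward(x), W @ x)

# Only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)  # 512 vs 4096
```

The parameter count printed at the end is the source of LoRA's efficiency: for a small rank r, the trainable update is a tiny fraction of the frozen matrix it modifies.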
arXiv Detail & Related papers (2023-11-03T08:54:50Z) - Sequential Condition Evolved Interaction Knowledge Graph for Traditional
Chinese Medicine Recommendation [9.953064118341812]
Traditional Chinese Medicine (TCM) has a rich history of utilizing natural herbs to treat a wide variety of illnesses.
Existing TCM recommendation approaches overlook the changes in patient status and only explore potential patterns between symptoms and prescriptions.
We propose a novel framework that treats TCM recommendation as a sequential prescription-making problem, accounting for the dynamics of the patient's condition.
arXiv Detail & Related papers (2023-05-29T03:13:39Z) - Medical-VLBERT: Medical Visual Language BERT for COVID-19 CT Report
Generation With Alternate Learning [70.71564065885542]
We propose to use the medical visual language BERT (Medical-VLBERT) model to identify the abnormality on the COVID-19 scans.
This model adopts an alternate learning strategy with two procedures that are knowledge pretraining and transferring.
For automatic medical report generation on the COVID-19 cases, we constructed a dataset of 368 medical findings in Chinese and 1104 chest CT scans.
arXiv Detail & Related papers (2021-08-11T07:12:57Z) - Learning-based Computer-aided Prescription Model for Parkinson's
Disease: A Data-driven Perspective [61.70045118068213]
We build a dataset by collecting symptoms of PD patients and the prescription drugs provided by neurologists.
Then, we build a novel computer-aided prescription model by learning the relation between observed symptoms and prescribed drugs.
For newly arriving patients, our prescription model can recommend (predict) suitable prescription drugs based on their observed symptoms.
arXiv Detail & Related papers (2020-07-31T14:34:35Z) - Syndrome-aware Herb Recommendation with Multi-Graph Convolution Network [49.85331664178196]
Herb recommendation plays a crucial role in the therapeutic process of Traditional Chinese Medicine.
We propose a new method that takes the implicit syndrome induction process into account for herb recommendation.
arXiv Detail & Related papers (2020-02-20T05:56:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.