RadOnc-GPT: A Large Language Model for Radiation Oncology
- URL: http://arxiv.org/abs/2309.10160v3
- Date: Mon, 6 Nov 2023 01:59:00 GMT
- Title: RadOnc-GPT: A Large Language Model for Radiation Oncology
- Authors: Zhengliang Liu, Peilong Wang, Yiwei Li, Jason Holmes, Peng Shu, Lian
Zhang, Chenbin Liu, Ninghao Liu, Dajiang Zhu, Xiang Li, Quanzheng Li, Samir
H. Patel, Terence T. Sio, Tianming Liu, Wei Liu
- Abstract summary: RadOnc-GPT was fine-tuned on a large dataset of radiation oncology patient records from the Mayo Clinic in Arizona.
The model employs instruction tuning on three key tasks - generating radiotherapy treatment regimens, determining optimal radiation modalities, and providing diagnostic descriptions/ICD codes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents RadOnc-GPT, a large language model specialized for
radiation oncology through advanced tuning methods. RadOnc-GPT was fine-tuned on
a large dataset of radiation oncology patient records from the Mayo Clinic in
Arizona. The model employs instruction tuning on three key tasks - generating
radiotherapy treatment regimens, determining optimal radiation modalities, and
providing diagnostic descriptions/ICD codes based on patient diagnostic
details. Evaluations conducted by comparing RadOnc-GPT outputs to general large
language model outputs showed higher ROUGE scores in these three tasks. The
study demonstrated the potential of using large language models fine-tuned
using domain-specific knowledge like RadOnc-GPT to achieve transformational
capabilities in highly specialized healthcare fields such as radiation
oncology. However, our model's clinical relevance requires confirmation, and it
specializes in only the aforementioned three specific tasks and lacks broader
applicability. Furthermore, its evaluation through ROUGE scores might not
reflect the true semantic and clinical accuracy - challenges we intend to
address in future research.
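The ROUGE evaluation mentioned above compares model outputs to reference texts via n-gram overlap. As a minimal illustration (not the paper's actual evaluation pipeline, which is not published here), the sketch below implements ROUGE-1 F1 from scratch; the example strings are hypothetical treatment-regimen snippets:

```python
from collections import Counter

def rouge_1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: unigram overlap between a reference and a candidate text."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in either text (Counter intersection takes the minimum).
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 7 of 8 unigrams overlap in each direction.
reference = "proton beam therapy 60 Gy in 30 fractions"
candidate = "proton therapy 60 Gy in 30 fractions daily"
print(round(rouge_1_f1(reference, candidate), 3))  # → 0.875
```

In practice, studies like this typically report ROUGE-1, ROUGE-2, and ROUGE-L via an established package rather than a hand-rolled metric; the point here is only to make concrete what "higher ROUGE scores" measures, and why surface overlap can miss semantic and clinical correctness.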
Related papers
- RAD-PHI2: Instruction Tuning PHI-2 for Radiology [2.774342358600601]
Small Language Models (SLMs) have shown remarkable performance in general domain language understanding, reasoning and coding tasks.
This study investigates the application of SLMs to general radiology knowledge, specifically question answering about the understanding of symptoms.
By fine-tuning Phi-2 on both general domain tasks and radiology-specific tasks related to chest X-ray reports, we create Rad-Phi2.
arXiv Detail & Related papers (2024-03-12T17:27:22Z)
- Large Model driven Radiology Report Generation with Clinical Quality Reinforcement Learning [16.849933628738277]
Radiology report generation (RRG) has attracted significant attention due to its potential to reduce the workload of radiologists.
This paper introduces a novel RRG method, LM-RRG, that integrates large models (LMs) with clinical quality reinforcement learning.
Experiments on the MIMIC-CXR and IU-Xray datasets demonstrate the superiority of our method over the state of the art.
arXiv Detail & Related papers (2024-03-11T13:47:11Z)
- Large-scale Long-tailed Disease Diagnosis on Radiology Images [51.453990034460304]
RadDiag is a foundational model supporting 2D and 3D inputs across various modalities and anatomies.
Our dataset, RP3D-DiagDS, contains 40,936 cases with 195,010 scans covering 5,568 disorders.
arXiv Detail & Related papers (2023-12-26T18:20:48Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Radiology-Llama2: Best-in-Class Large Language Model for Radiology [71.27700230067168]
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-08-29T17:44:28Z)
- Radiology-GPT: A Large Language Model for Radiology [74.07944784968372]
We introduce Radiology-GPT, a large language model for radiology.
It demonstrates superior performance compared to general language models such as StableLM, Dolly and LLaMA.
It exhibits significant versatility in radiological diagnosis, research, and communication.
arXiv Detail & Related papers (2023-06-14T17:57:24Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.