Radiology-Llama2: Best-in-Class Large Language Model for Radiology
- URL: http://arxiv.org/abs/2309.06419v1
- Date: Tue, 29 Aug 2023 17:44:28 GMT
- Title: Radiology-Llama2: Best-in-Class Large Language Model for Radiology
- Authors: Zhengliang Liu, Yiwei Li, Peng Shu, Aoxiao Zhong, Longtao Yang, Chao
Ju, Zihao Wu, Chong Ma, Jie Luo, Cheng Chen, Sekeun Kim, Jiang Hu, Haixing
Dai, Lin Zhao, Dajiang Zhu, Jun Liu, Wei Liu, Dinggang Shen, Tianming Liu,
Quanzheng Li, and Xiang Li
- Abstract summary: This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
- Score: 71.27700230067168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Radiology-Llama2, a large language model specialized
for radiology through a process known as instruction tuning. Radiology-Llama2
is based on the Llama2 architecture and further trained on a large dataset of
radiology reports to generate coherent and clinically useful impressions from
radiological findings. Quantitative evaluations using ROUGE metrics on the
MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves
state-of-the-art performance compared to other generative language models, with
a ROUGE-1 score of 0.4834 on MIMIC-CXR and 0.4185 on OpenI. Additional
assessments by radiology experts highlight the model's strengths in
understandability, coherence, relevance, conciseness, and clinical utility. The
work illustrates the potential of localized language models designed and tuned
for specialized domains like radiology. When properly evaluated and deployed,
such models can transform fields like radiology by automating rote tasks and
enhancing human expertise.
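The quantitative evaluation above relies on ROUGE-1, which scores the unigram overlap between a generated impression and the radiologist-written reference. Published results typically use an official implementation (e.g. the `rouge-score` package); the snippet below is only a minimal sketch of the ROUGE-1 F1 idea, with hypothetical example strings.

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each shared token counts up to its frequency in both texts.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical findings/impression pair for illustration only.
reference = "no acute cardiopulmonary abnormality identified"
generated = "no acute cardiopulmonary abnormality"
print(round(rouge_1_f1(generated, reference), 4))  # → 0.8889
```

Production scorers add stemming and tokenization details that this sketch omits, so its numbers are not directly comparable to the paper's reported scores.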
Related papers
- RAD-PHI2: Instruction Tuning PHI-2 for Radiology [2.774342358600601]
Small Language Models (SLMs) have shown remarkable performance in general domain language understanding, reasoning and coding tasks.
This study investigates the application of SLMs to general radiology knowledge, specifically question answering related to understanding symptoms.
By fine-tuning Phi-2 on both general domain tasks and radiology-specific tasks related to chest X-ray reports, we create Rad-Phi2.
arXiv Detail & Related papers (2024-03-12T17:27:22Z) - Large Model driven Radiology Report Generation with Clinical Quality
Reinforcement Learning [16.849933628738277]
Radiology report generation (RRG) has attracted significant attention due to its potential to reduce the workload of radiologists.
This paper introduces a novel RRG method, LM-RRG, that integrates large models (LMs) with clinical quality reinforcement learning.
Experiments on the MIMIC-CXR and IU-Xray datasets demonstrate the superiority of our method over the state of the art.
arXiv Detail & Related papers (2024-03-11T13:47:11Z) - ChatRadio-Valuer: A Chat Large Language Model for Generalizable
Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z) - Evaluating Large Language Models for Radiology Natural Language
Processing [68.98847776913381]
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
This study seeks to bridge this gap by critically evaluating thirty-two LLMs in interpreting radiology reports.
arXiv Detail & Related papers (2023-07-25T17:57:18Z) - Radiology-GPT: A Large Language Model for Radiology [74.07944784968372]
We introduce Radiology-GPT, a large language model for radiology.
It demonstrates superior performance compared to general language models such as StableLM, Dolly and LLaMA.
It exhibits significant versatility in radiological diagnosis, research, and communication.
arXiv Detail & Related papers (2023-06-14T17:57:24Z) - An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT [80.33783969507458]
The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians.
Recent studies have achieved promising results in automatic impression generation using large-scale medical text data.
These models often require substantial amounts of medical text data and have poor generalization performance.
arXiv Detail & Related papers (2023-04-17T17:13:42Z) - Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z) - Generating Radiology Reports via Memory-driven Transformer [38.30011851429407]
We propose to generate radiology reports with memory-driven Transformer.
Experimental results on two prevalent radiology report datasets, IU X-Ray and MIMIC-CXR, demonstrate the effectiveness of the approach.
arXiv Detail & Related papers (2020-10-30T04:08:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.