Multilingual Natural Language Processing Model for Radiology Reports --
The Summary is all you need!
- URL: http://arxiv.org/abs/2310.00100v4
- Date: Sat, 13 Jan 2024 15:44:00 GMT
- Title: Multilingual Natural Language Processing Model for Radiology Reports --
The Summary is all you need!
- Authors: Mariana Lindo, Ana Sofia Santos, André Ferreira, Jianning Li, Gijs
Luijten, Gustavo Correia, Moon Kim, Benedikt Michael Schaarschmidt, Cornelius
Deuschl, Johannes Haubold, Jens Kleesiek, Jan Egger and Victor Alves
- Abstract summary: The generation of radiology impressions was automated by fine-tuning a model based on a multilingual text-to-text Transformer.
In a blind test, two board-certified radiologists indicated that for at least 70% of the system-generated summaries, the quality matched or exceeded the corresponding human-written summaries.
This study showed that the multilingual model outperformed other models that specialized in summarizing radiology reports in only one language, as well as models that were not specifically designed for summarizing radiology reports.
- Score: 2.4910932804601855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impression section of a radiology report summarizes important radiology
findings and plays a critical role in communicating these findings to
physicians. However, the preparation of these summaries is time-consuming and
error-prone for radiologists. Recently, numerous models for radiology report
summarization have been developed. Nevertheless, there is currently no model
that can summarize these reports in multiple languages. Such a model could
greatly improve future research and the development of Deep Learning models
that incorporate data from patients with different ethnic backgrounds. In this
study, the generation of radiology impressions in different languages was
automated by fine-tuning a publicly available model based on a multilingual
text-to-text Transformer to summarize the findings in English,
Portuguese, and German radiology reports. In a blind test, two board-certified
radiologists indicated that for at least 70% of the system-generated summaries,
the quality matched or exceeded the corresponding human-written summaries,
suggesting substantial clinical reliability. Furthermore, this study showed
that the multilingual model outperformed other models that specialized in
summarizing radiology reports in only one language, as well as models that were
not specifically designed for summarizing radiology reports, such as ChatGPT.
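For readers who want a concrete picture of what such a fine-tuning setup can look like, below is a minimal sketch using the Hugging Face transformers library with an mT5 checkpoint. The checkpoint name, the CSV column names ("findings", "impression"), and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: fine-tuning a multilingual text-to-text Transformer
# (here mT5-small) to map radiology findings to impression summaries.
# Checkpoint, file names, column names and hyperparameters are assumptions.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer,
                          DataCollatorForSeq2Seq)
from datasets import load_dataset

checkpoint = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Assumes CSV files with one report per row: a "findings" column (input)
# and an "impression" column (target summary), in any of the languages.
dataset = load_dataset("csv", data_files={"train": "reports_train.csv",
                                          "validation": "reports_val.csv"})

def preprocess(batch):
    # Tokenize findings as the encoder input and impressions as the labels.
    model_inputs = tokenizer(batch["findings"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["impression"], max_length=128,
                       truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-radiology-summarization",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

Because mT5 is pretrained on many languages, a single model trained this way can, in principle, summarize English, Portuguese, and German reports without language-specific components.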
Related papers
- ReXErr: Synthesizing Clinically Meaningful Errors in Diagnostic Radiology Reports [1.9106067578277455]
We introduce ReXErr, a methodology that leverages Large Language Models to generate representative errors within chest X-ray reports.
We developed error categories that capture common mistakes in both human and AI-generated reports.
Our approach uses a novel sampling scheme to inject diverse errors while maintaining clinical plausibility.
arXiv Detail & Related papers (2024-09-17T01:42:39Z)
- RaTEScore: A Metric for Radiology Report Generation [59.37561810438641]
This paper introduces a novel, entity-aware metric, Radiological Report (Text) Evaluation (RaTEScore).
RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions.
Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, validated both on established public benchmarks and our newly proposed RaTE-Eval benchmark.
arXiv Detail & Related papers (2024-06-24T17:49:28Z)
- Large Model driven Radiology Report Generation with Clinical Quality Reinforcement Learning [16.849933628738277]
Radiology report generation (RRG) has attracted significant attention due to its potential to reduce the workload of radiologists.
This paper introduces a novel RRG method, LM-RRG, that integrates large models (LMs) with clinical quality reinforcement learning.
Experiments on the MIMIC-CXR and IU-Xray datasets demonstrate the superiority of our method over the state of the art.
arXiv Detail & Related papers (2024-03-11T13:47:11Z)
- Consensus, dissensus and synergy between clinicians and specialist foundation models in radiology report generation [32.26270073540666]
The worldwide shortage of radiologists restricts access to expert care and imposes heavy workloads.
Recent progress in automated report generation with vision-language models offers clear potential for ameliorating the situation.
We build a state-of-the-art report generation system for chest radiographs, Flamingo-CXR, by fine-tuning a well-known vision-language foundation model on radiology data.
arXiv Detail & Related papers (2023-11-30T05:38:34Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Radiology-Llama2: Best-in-Class Large Language Model for Radiology [71.27700230067168]
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-08-29T17:44:28Z)
- Evaluating Large Language Models for Radiology Natural Language Processing [68.98847776913381]
The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP).
This study seeks to bridge this gap by critically evaluating thirty-two LLMs in interpreting radiology reports.
arXiv Detail & Related papers (2023-07-25T17:57:18Z)
- Radiology-GPT: A Large Language Model for Radiology [74.07944784968372]
We introduce Radiology-GPT, a large language model for radiology.
It demonstrates superior performance compared to general language models such as StableLM, Dolly and LLaMA.
It exhibits significant versatility in radiological diagnosis, research, and communication.
arXiv Detail & Related papers (2023-06-14T17:57:24Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO dataset.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Learning Semi-Structured Representations of Radiology Reports [10.134080761449093]
Given a corpus of radiology reports, researchers are often interested in identifying a subset of reports describing a particular medical finding.
Recent studies proposed mapping free-text statements in radiology reports to semi-structured strings of terms taken from a limited vocabulary.
This paper aims to present an approach for the automatic generation of semi-structured representations of radiology reports.
arXiv Detail & Related papers (2021-12-20T18:53:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.