Domain-adapted large language models for classifying nuclear medicine
reports
- URL: http://arxiv.org/abs/2303.01258v1
- Date: Wed, 1 Mar 2023 09:48:39 GMT
- Title: Domain-adapted large language models for classifying nuclear medicine
reports
- Authors: Zachary Huemann, Changhee Lee, Junjie Hu, Steve Y. Cho, Tyler Bradshaw
- Abstract summary: We retrospectively retrieved 4542 text reports and 1664 images for FDG PET/CT lymphoma exams from 2008-2018.
Multiple general-purpose transformer language models were used to classify the reports into Deauville scores 1-5.
We adapted the models to the nuclear medicine domain using masked language modeling and assessed its impact on classification performance.
- Score: 11.364745410780678
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the growing use of transformer-based language models in medicine, it is
unclear how well these models generalize to nuclear medicine, which has
domain-specific vocabulary and unique reporting styles. In this study, we
evaluated the value of domain adaptation in nuclear medicine by adapting
language models for the purpose of 5-point Deauville score prediction based on
clinical 18F-fluorodeoxyglucose (FDG) PET/CT reports. We retrospectively
retrieved 4542 text reports and 1664 images for FDG PET/CT lymphoma exams from
2008-2018 in our clinical imaging database. Deauville scores were removed from
the reports and then the remaining text in the reports was used as the model
input. Multiple general-purpose transformer language models were used to
classify the reports into Deauville scores 1-5. We then adapted the models to
the nuclear medicine domain using masked language modeling and assessed its
impact on classification performance. The language models were compared against
vision models, a multimodal vision language model, and a nuclear medicine
physician using seven-fold Monte Carlo cross-validation; means and standard
deviations are reported. Domain adaptation improved all language models. For
example, BERT improved from 61.3% five-class accuracy to 65.7% following domain
adaptation. The best performing model (domain-adapted RoBERTa) achieved a
five-class accuracy of 77.4%, which was better than the physician's performance
(66%) and the best vision model's performance (48.1%), and was similar to the
multimodal model's performance (77.2%). Domain adaptation improved the
performance of large language models in interpreting nuclear medicine text
reports.
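The domain-adaptation step relies on the standard masked-language-modeling objective: a fraction of report tokens is hidden and the model learns to reconstruct them from nuclear-medicine context. Below is a minimal sketch of the usual BERT-style masking scheme (15% of tokens selected; of those, 80% replaced by a [MASK] token, 10% by a random vocabulary token, 10% left unchanged). The function and token names are illustrative, not the authors' exact implementation:

```python
import random

MASK_TOKEN = "[MASK]"

def mlm_mask(tokens, vocab, mask_prob=0.15, seed=None):
    """Apply BERT-style masking to a token sequence.

    Returns (masked_tokens, labels), where labels[i] holds the original
    token wherever the model must make a prediction, and None elsewhere.
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # model is trained to recover this token
            r = rng.random()
            if r < 0.8:
                masked.append(MASK_TOKEN)         # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)                # 10%: keep the original
        else:
            labels.append(None)  # no loss computed at this position
            masked.append(tok)
    return masked, labels
```

In practice this masking is applied to the report corpus (with Deauville scores removed) for additional pretraining epochs before the classification head is fine-tuned; libraries such as Hugging Face Transformers provide an equivalent collator out of the box.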
Related papers
- Enhancing Clinical Text Classification via Fine-Tuned DRAGON Longformer Models [7.514574388197471]
This study explores the optimization of the DRAGON Longformer base model for clinical text classification. A dataset of 500 clinical cases containing structured medical observations was used. The optimized model achieved notable performance gains.
arXiv Detail & Related papers (2025-07-13T03:10:19Z)
- Evaluating Vision Language Model Adaptations for Radiology Report Generation in Low-Resource Languages [1.3699492682906507]
Language-specific models substantially outperformed both general and domain-specific models in generating radiology reports. Models fine-tuned with medical terminology exhibited enhanced performance across all languages.
arXiv Detail & Related papers (2025-05-02T08:14:03Z)
- CXR-Agent: Vision-language models for chest X-ray interpretation with uncertainty aware radiology reporting [0.0]
We evaluate publicly available, state-of-the-art foundation vision-language models for chest X-ray interpretation.
We find that vision-language models often hallucinate with confident language, which slows down clinical interpretation.
We develop an agent-based vision-language approach for report generation using CheXagent's linear probes and BioViL-T's phrase grounding tools.
arXiv Detail & Related papers (2024-07-11T18:39:19Z)
- PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language Models for Medical Imaging [8.043625583479598]
Multimodal large language models (MLLMs) represent an evolutionary expansion in the capabilities of traditional large language models.
Recent works investigate the adaptation of MLLMs as a universal solution to address medical multi-modal problems as a generative task.
We propose a parameter efficient framework for fine-tuning MLLMs, specifically validated on medical visual question answering (Med-VQA) and medical report generation (MRG) tasks.
arXiv Detail & Related papers (2024-01-05T13:22:12Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, notably ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Customizing General-Purpose Foundation Models for Medical Report Generation [64.31265734687182]
The scarcity of labelled medical image-report pairs presents great challenges in the development of deep and large-scale neural networks.
We propose customizing off-the-shelf general-purpose large-scale pre-trained models, i.e., foundation models (FMs) in computer vision and natural language processing.
arXiv Detail & Related papers (2023-06-09T03:02:36Z)
- ConTEXTual Net: A Multimodal Vision-Language Model for Segmentation of Pneumothorax [5.168314889999992]
We propose a novel vision-language model, ConTEXTual Net, for the task of pneumothorax segmentation on chest radiographs.
We trained it on the CANDID-PTX dataset consisting of 3,196 positive cases of pneumothorax.
It achieved a Dice score of 0.716 ± 0.016, which was similar to the degree of inter-reader variability.
It outperformed both vision-only models and a competing vision-language model.
arXiv Detail & Related papers (2023-03-02T22:36:19Z)
- mFACE: Multilingual Summarization with Factual Consistency Evaluation [79.60172087719356]
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets.
Despite promising results, current models still suffer from generating factually inconsistent summaries.
We leverage factual consistency evaluation models to improve multilingual summarization.
arXiv Detail & Related papers (2022-12-20T19:52:41Z)
- Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations [58.442103936918805]
We show that Attention Mask Consistency produces superior visual grounding results than previous methods.
AMC is effective, easy to implement, and is general as it can be adopted by any vision-language model.
arXiv Detail & Related papers (2022-06-30T17:55:12Z)
- Scaling Language Models: Methods, Analysis & Insights from Training Gopher [83.98181046650664]
We present an analysis of Transformer-based language model performance across a wide range of model scales.
Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language.
We discuss the application of language models to AI safety and the mitigation of downstream harms.
arXiv Detail & Related papers (2021-12-08T19:41:47Z)
- FPM: A Collection of Large-scale Foundation Pre-trained Language Models [0.0]
We release a set of models built on currently effective model architectures using the most mainstream pretraining techniques.
We expect these to serve as foundational models in the future.
arXiv Detail & Related papers (2021-11-09T02:17:15Z)
- Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model [58.27176041092891]
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements.
We propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features from the entangled pretrained cross-lingual representations.
Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts.
arXiv Detail & Related papers (2020-11-23T16:00:42Z)
- The Utility of General Domain Transfer Learning for Medical Language Tasks [1.5459429010135775]
The purpose of this study is to analyze the efficacy of transfer learning techniques and transformer-based models as applied to medical natural language processing (NLP) tasks.
General text transfer learning may be a viable technique to generate state-of-the-art results within medical NLP tasks on radiological corpora.
arXiv Detail & Related papers (2020-02-16T20:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.