Toward expanding the scope of radiology report summarization to multiple
anatomies and modalities
- URL: http://arxiv.org/abs/2211.08584v3
- Date: Fri, 21 Jul 2023 22:08:45 GMT
- Authors: Zhihong Chen, Maya Varma, Xiang Wan, Curtis Langlotz, Jean-Benoit
Delbrouck
- Abstract summary: We propose a dataset (MIMIC-RRS) involving three new modalities and seven new anatomies.
We then conduct extensive experiments to evaluate the performance of models both within and across modality-anatomy pairs in MIMIC-RRS.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radiology report summarization (RRS) is a growing area of research. Given the
Findings section of a radiology report, the goal is to generate a summary
(called an Impression section) that highlights the key observations and
conclusions of the radiology study. However, RRS currently faces essential
limitations. First, many prior studies conduct experiments on private datasets,
preventing reproduction of results and fair comparisons across different
systems and solutions. Second, most prior approaches are evaluated solely on
chest X-rays. To address these limitations, we propose a dataset (MIMIC-RRS)
involving three new modalities and seven new anatomies based on the MIMIC-III
and MIMIC-CXR datasets. We then conduct extensive experiments to evaluate the
performance of models both within and across modality-anatomy pairs in
MIMIC-RRS. In addition, we evaluate their clinical efficacy via RadGraph, a
factual correctness metric.
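To make the evaluation setup concrete: RRS systems are typically scored by comparing a generated Impression against the radiologist-written reference, using overlap metrics such as ROUGE alongside factual-correctness metrics like RadGraph. The sketch below is a generic, self-contained ROUGE-L illustration, not the authors' actual evaluation pipeline (which relies on packaged ROUGE implementations and the RadGraph tooling); the example sentences are invented.

```python
# Minimal ROUGE-L F1 sketch for comparing a generated Impression with a
# reference one. Illustrative only: real RRS studies use packaged ROUGE
# implementations and, for factual correctness, the RadGraph metric.

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if tok_a == tok_b
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand or not ref:
        return 0.0
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical Impression pair: 3 of 4 tokens align, so F1 = 0.75.
reference = "no acute cardiopulmonary abnormality"
candidate = "no acute cardiopulmonary process"
print(round(rouge_l_f1(candidate, reference), 3))  # → 0.75
```

Cross-modality evaluation in MIMIC-RRS then amounts to computing such scores per modality-anatomy pair, so that a model trained on chest X-rays can be compared against one trained on, say, CT reports of another anatomy.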
Related papers
- LLM-RadJudge: Achieving Radiologist-Level Evaluation for X-Ray Report Generation [37.20505633019773]
Evaluating generated radiology reports is crucial for the development of radiology AI.
This study proposes a novel evaluation framework using large language models (LLMs) to compare radiology reports for assessment.
arXiv Detail & Related papers (2024-04-01T09:02:12Z)
- Large Model driven Radiology Report Generation with Clinical Quality Reinforcement Learning [16.849933628738277]
Radiology report generation (RRG) has attracted significant attention due to its potential to reduce the workload of radiologists.
This paper introduces a novel RRG method, LM-RRG, that integrates large models (LMs) with clinical quality reinforcement learning.
Experiments on the MIMIC-CXR and IU-Xray datasets demonstrate the superiority of our method over the state of the art.
arXiv Detail & Related papers (2024-03-11T13:47:11Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a remarkable total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Radiology-Llama2: Best-in-Class Large Language Model for Radiology [71.27700230067168]
This paper introduces Radiology-Llama2, a large language model specialized for radiology through a process known as instruction tuning.
Quantitative evaluations using ROUGE metrics on the MIMIC-CXR and OpenI datasets demonstrate that Radiology-Llama2 achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-08-29T17:44:28Z)
- MDF-Net for abnormality detection by fusing X-rays with clinical data [14.347359031598813]
This study investigates the effects of including patients' clinical information on the performance of deep learning (DL) classifiers for disease location in chest X-rays.
We propose a novel architecture consisting of two fusion methods that enable the model to simultaneously process patients' clinical data and chest X-rays.
Results show that incorporating patients' clinical data in a DL model together with the proposed fusion methods improves the disease localization in chest X-rays by 12% in terms of Average Precision.
arXiv Detail & Related papers (2023-02-26T19:16:57Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Event-based clinical findings extraction from radiology reports with pre-trained language model [0.22940141855172028]
We present a new corpus of radiology reports annotated with clinical findings.
The gold standard corpus contained a total of 500 annotated computed tomography (CT) reports.
We extracted triggers and argument entities using two state-of-the-art deep learning architectures, including BERT.
arXiv Detail & Related papers (2021-12-27T05:03:10Z)
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE), and Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.