Boosting Radiology Report Generation by Infusing Comparison Prior
- URL: http://arxiv.org/abs/2305.04561v2
- Date: Mon, 5 Jun 2023 10:28:11 GMT
- Title: Boosting Radiology Report Generation by Infusing Comparison Prior
- Authors: Sanghwan Kim, Farhad Nooralahzadeh, Morteza Rohanian, Koji Fujimoto,
Mizuho Nishio, Ryo Sakamoto, Fabio Rinaldi, and Michael Krauthammer
- Abstract summary: Recent transformer-based models have made significant strides in generating radiology reports from chest X-ray images.
These models often lack prior knowledge, resulting in the generation of synthetic reports that mistakenly reference non-existent prior exams.
We propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports.
- Score: 7.054671146863795
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent transformer-based models have made significant strides in generating
radiology reports from chest X-ray images. However, a prominent challenge
remains: these models often lack prior knowledge, resulting in the generation
of synthetic reports that mistakenly reference non-existent prior exams. This
discrepancy can be attributed to a knowledge gap between radiologists and the
generation models. While radiologists possess patient-specific prior
information, the models solely receive X-ray images at a specific time point.
To tackle this issue, we propose a novel approach that leverages a rule-based
labeler to extract comparison prior information from radiology reports. This
extracted comparison prior is then seamlessly integrated into state-of-the-art
transformer-based models, enabling them to produce more realistic and
comprehensive reports. Our method is evaluated on English report datasets, such
as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses
baseline models in terms of natural language generation metrics. Notably, our
model generates reports that are free from false references to non-existent
prior exams, setting it apart from previous models. By addressing this
limitation, our approach represents a significant step towards bridging the gap
between radiologists and generation models in the domain of medical report
generation.
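The abstract names a rule-based labeler for extracting comparison-prior information but does not publish its rules. As an illustration only, a minimal sketch of how such a labeler could flag prior-exam references (every phrase pattern below is a hypothetical stand-in, not the paper's actual rule set):

```python
import re

# Hypothetical phrase patterns signalling a comparison to a prior exam;
# the paper's actual rule set is not given in the abstract, so these
# are illustrative only.
COMPARISON_PATTERNS = [
    r"\bcompared (?:to|with) (?:the )?prior\b",
    r"\b(?:unchanged|stable|improved|worsened) (?:from|since) (?:the )?(?:prior|previous)\b",
    r"\bprevious (?:exam|study|radiograph)\b",
    r"\bno prior\b",
]

def has_comparison_prior(report: str) -> bool:
    """Rule-based label: does the report text reference a prior exam?"""
    text = report.lower()
    return any(re.search(p, text) for p in COMPARISON_PATTERNS)

print(has_comparison_prior("Cardiomegaly is unchanged from the prior study."))  # True
print(has_comparison_prior("The lungs are clear. No focal consolidation."))     # False
```

A label produced this way could then be fed to the report generator as an extra input token or embedding, which is the general shape of the infusion the abstract describes.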
Related papers
- RaTEScore: A Metric for Radiology Report Generation [59.37561810438641]
This paper introduces a novel, entity-aware metric named Radiological Report (Text) Evaluation (RaTEScore).
RaTEScore emphasizes crucial medical entities such as diagnostic outcomes and anatomical details, and is robust against complex medical synonyms and sensitive to negation expressions.
Our evaluations demonstrate that RaTEScore aligns more closely with human preference than existing metrics, validated both on established public benchmarks and our newly proposed RaTE-Eval benchmark.
arXiv Detail & Related papers (2024-06-24T17:49:28Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Longitudinal Data and a Semantic Similarity Reward for Chest X-Ray Report Generation [7.586632627817609]
Radiologists face high burnout rates, partly due to the increasing volume of Chest X-rays (CXRs) requiring interpretation and reporting.
Our proposed CXR report generator integrates elements of the workflow and introduces a novel reward for reinforcement learning.
Results from our study demonstrate that the proposed model generates reports that are more aligned with radiologists' reports than state-of-the-art models.
arXiv Detail & Related papers (2023-07-19T05:41:14Z)
- Replace and Report: NLP Assisted Radiology Report Generation [31.309987297324845]
We propose a template-based approach to generate radiology reports from radiographs.
This is the first attempt to generate chest X-ray radiology reports by first composing short sentences for abnormal findings and then substituting them into a normal report template.
arXiv Detail & Related papers (2023-06-19T10:04:42Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Improving Radiology Report Generation Systems by Removing Hallucinated References to Non-existent Priors [1.1110995501996481]
We propose two methods to remove references to priors in radiology reports.
A GPT-3-based few-shot approach to rewrite medical reports without references to priors; and a BioBERT-based token classification approach to directly remove words referring to priors.
We find that our re-trained model, which we call CXR-ReDonE, outperforms previous report generation methods on clinical metrics, achieving an average BERTScore of 0.2351 (a 2.57% absolute improvement).
arXiv Detail & Related papers (2022-09-27T00:44:41Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose a Contrastive Attention (CA) model, which compares the current input image with normal images to distill contrastive information.
We achieve the state-of-the-art results on the two public datasets.
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE), and Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)
- Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays [6.686095511538683]
This work focuses on reporting abnormal findings on radiology images.
We propose a method to identify abnormal findings from the reports in addition to grouping them with unsupervised clustering and minimal rules.
We demonstrate that our method is able to retrieve abnormal findings and outperforms existing generation models on both clinical correctness and text generation metrics.
arXiv Detail & Related papers (2020-10-06T04:18:18Z)
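Several of the papers listed above, most directly CXR-ReDonE, revolve around removing hallucinated references to prior exams from generated reports. A minimal rule-based sketch of sentence-level removal (the keyword list and naive sentence splitter below are simplifications; the published systems use GPT-3 rewriting or BioBERT token classification, not this rule):

```python
import re

# Hypothetical keywords that often signal a reference to a prior exam;
# purely illustrative, not the published GPT-3 or BioBERT pipelines.
PRIOR_RE = re.compile(r"\b(prior|previous|interval|compared|unchanged)\b",
                      re.IGNORECASE)

def remove_prior_sentences(report: str) -> str:
    """Drop whole sentences that mention a prior exam, keep the rest."""
    # Naive splitter: break on whitespace that follows a period.
    sentences = re.split(r"(?<=\.)\s+", report.strip())
    return " ".join(s for s in sentences if not PRIOR_RE.search(s))

report = ("Heart size is enlarged, unchanged from prior. "
          "The lungs are clear. No pleural effusion.")
print(remove_prior_sentences(report))
# -> The lungs are clear. No pleural effusion.
```

Sentence-level filtering like this is coarse; token-level classification, as in the BioBERT variant described above, can instead excise only the offending words and preserve the rest of the sentence.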
This list is automatically generated from the titles and abstracts of the papers in this site.