Reading Radiology Imaging Like The Radiologist
- URL: http://arxiv.org/abs/2307.05921v3
- Date: Thu, 20 Jul 2023 08:14:17 GMT
- Title: Reading Radiology Imaging Like The Radiologist
- Authors: Yuhao Wang
- Abstract summary: We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions.
Our framework can find the most similar reports for a given disease in the CXR database by retrieving a disease-oriented mask.
- Score: 3.218449686637963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated radiology report generation aims to generate radiology reports that
contain rich, fine-grained descriptions of radiology imaging. Unlike the natural images used in generic image captioning, medical images are very similar to one another, with only minor differences in the regions where diseases occur. Given
the importance of these minor differences in the radiology report, it is
crucial to encourage the model to focus more on the subtle regions of disease
occurrence. Second, visual and textual data biases are severe: normal cases make up the majority of the dataset, and sentences describing areas with pathological changes constitute only a small part of each report. Lastly, generating medical image reports is a long-text generation task that demands substantial medical expertise and empirical training, which further increases the difficulty. To address these challenges, we propose a disease-oriented retrieval
framework that utilizes similar reports as prior knowledge references. We
design a factual consistency captioning generator to generate more accurate and
factually consistent disease descriptions. Our framework can find the most similar reports for a given disease in the CXR database by retrieving a disease-oriented mask consisting of the disease's position and morphological characteristics. By referencing the disease-oriented similar report and the
visual features, the factual consistency model can generate a more accurate
radiology report.
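The abstract describes the retrieval step only at a high level. Below is a minimal, self-contained sketch of what disease-oriented retrieval over a report database could look like; it is an illustration, not the paper's implementation. The names (`retrieve_similar_reports`, `db_mask_embs`, `db_reports`) and the use of plain cosine similarity over a hypothetical mask embedding are assumptions made for the example.

```python
# Sketch (not the paper's code): each report in a CXR database is indexed by a
# "disease-oriented mask" embedding assumed to encode the position and
# morphological characteristics of the finding. A query image's mask embedding
# is matched against the database to retrieve prior-knowledge reports.
import numpy as np

def cosine_sim(query: np.ndarray, db: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of a database matrix."""
    q = query / (np.linalg.norm(query) + 1e-8)
    d = db / (np.linalg.norm(db, axis=1, keepdims=True) + 1e-8)
    return d @ q

def retrieve_similar_reports(query_mask_emb, db_mask_embs, db_reports, top_k=3):
    """Return the top-k reports whose mask embeddings are closest to the query."""
    scores = cosine_sim(query_mask_emb, db_mask_embs)
    top_idx = np.argsort(-scores)[:top_k]
    return [(db_reports[i], float(scores[i])) for i in top_idx]

# Toy usage: 4 indexed reports with 8-dim mask embeddings, one query embedding.
rng = np.random.default_rng(0)
db_mask_embs = rng.normal(size=(4, 8))
db_reports = ["no acute findings", "right lower lobe opacity",
              "cardiomegaly", "left pleural effusion"]
query_mask_emb = db_mask_embs[1] + 0.05 * rng.normal(size=8)  # query resembles report 1

retrieved = retrieve_similar_reports(query_mask_emb, db_mask_embs, db_reports)
print(retrieved[0])  # nearest match: "right lower lobe opacity", score close to 1
```

In the framework described above, the retrieved reports would then be passed, together with the visual features, to the factual consistency captioning generator as prior-knowledge references; the sketch covers only the retrieval step.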
Related papers
- Improving Factuality of 3D Brain MRI Report Generation with Paired Image-domain Retrieval and Text-domain Augmentation [42.13004422063442]
Acute ischemic stroke (AIS) requires time-critical management, and hours of delayed intervention can lead to irreversible disability for the patient.
Since diffusion-weighted imaging (DWI), a magnetic resonance imaging (MRI) sequence, plays a crucial role in the detection of AIS, automated prediction of AIS from DWI has been a research topic of clinical importance.
While text radiology reports contain the most relevant clinical information from the image findings, the difficulty of mapping across different modalities has limited the factuality of conventional direct DWI-to-report generation methods.
arXiv Detail & Related papers (2024-11-23T08:18:55Z) - Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report
Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z) - Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z) - A Self-Guided Framework for Radiology Report Generation [10.573538773141715]
A self-guided framework (SGF) is developed to generate medical reports with annotated disease labels.
SGF uses unsupervised and supervised deep learning methods to mimic the process of human learning and writing.
Our results highlight the capacity of the proposed framework to distinguish fine-grained visual details between words.
arXiv Detail & Related papers (2022-06-19T11:09:27Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
arXiv Detail & Related papers (2021-07-15T13:34:38Z) - Unifying Relational Sentence Generation and Retrieval for Medical Image
Report Composition [142.42920413017163]
Current methods often generate the most common sentences due to dataset bias for individual cases.
We propose a novel framework that unifies template retrieval and sentence generation to handle both common and rare abnormalities.
arXiv Detail & Related papers (2021-01-09T04:33:27Z) - Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report
Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z) - Show, Describe and Conclude: On Exploiting the Structure Information of
Chest X-Ray Reports [5.6070625920019825]
Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis.
The complex structures between and within sections of the reports pose a great challenge to the automatic report generation.
We propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports.
arXiv Detail & Related papers (2020-04-26T02:29:20Z) - When Radiology Report Generation Meets Knowledge Graph [17.59749125131158]
The accuracy of positive disease keyword mentions is critical in radiology image reporting.
The evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes.
We propose a new evaluation metric for radiology image reporting with the assistance of the composed knowledge graph.
arXiv Detail & Related papers (2020-02-19T16:39:42Z)