Automated Knee X-ray Report Generation
- URL: http://arxiv.org/abs/2105.10702v1
- Date: Sat, 22 May 2021 11:59:42 GMT
- Title: Automated Knee X-ray Report Generation
- Authors: Aydan Gasimova, Giovanni Montana, Daniel Rueckert
- Abstract summary: We propose to take advantage of past radiological exams and formulate a framework capable of learning the correspondence between the images and reports.
We demonstrate how aggregating the image features of individual exams and using them as conditional inputs when training a language generation model results in auto-generated exam reports.
- Score: 12.732469371097347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gathering manually annotated images for the purpose of training a predictive
model is far more challenging in the medical domain than for natural images as
it requires the expertise of qualified radiologists. We therefore propose to
take advantage of past radiological exams (specifically, knee X-ray
examinations) and formulate a framework capable of learning the correspondence
between the images and reports, and hence be capable of generating diagnostic
reports for a given X-ray examination consisting of an arbitrary number of
image views. We demonstrate how aggregating the image features of individual
exams and using them as conditional inputs when training a language generation
model results in auto-generated exam reports that correlate well with
radiologist-generated reports.
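As a rough illustration of the framework described above, the sketch below mean-pools per-view CNN features of one exam and uses the pooled feature to condition an LSTM report decoder. The ResNet backbone, the mean-pooling operator, and the decoder choice are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ExamReportGenerator(nn.Module):
    """Sketch: encode each X-ray view, aggregate across views, condition a text decoder."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Per-view image encoder (ResNet-18 backbone is an assumption).
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (V, 512, 1, 1)
        self.img_proj = nn.Linear(512, hidden_dim)
        # Conditional language model: LSTM decoder over report tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, views, tokens):
        # views: (num_views, 3, H, W) for ONE exam; tokens: (1, T) report tokens so far.
        feats = self.encoder(views).flatten(1)            # (num_views, 512)
        exam_feat = feats.mean(dim=0, keepdim=True)       # aggregate arbitrary #views -> (1, 512)
        h0 = torch.tanh(self.img_proj(exam_feat)).unsqueeze(0)  # exam feature as initial state
        c0 = torch.zeros_like(h0)
        emb = self.embed(tokens)                          # (1, T, embed_dim)
        out, _ = self.lstm(emb, (h0, c0))
        return self.out(out)                              # (1, T, vocab_size) logits

# Example: a 3-view knee exam and a 10-token partial report.
model = ExamReportGenerator(vocab_size=5000)
logits = model(torch.randn(3, 3, 224, 224), torch.randint(0, 5000, (1, 10)))
print(logits.shape)  # torch.Size([1, 10, 5000])
```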
Related papers
- Designing a Robust Radiology Report Generation System [1.0878040851637998]
This paper outlines the design of a robust radiology report generation system by integrating different modules and highlighting best practices.
We believe that these best practices could improve automatic radiology report generation, augment radiologists in decision making, and expedite diagnostic workflow.
arXiv Detail & Related papers (2024-11-02T06:38:04Z)
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
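A hedged sketch of how such a fusion step could look, with image features attending over retrieved-case and indication features; the cross-attention layout, dimensions, and token counts are illustrative assumptions, not the SEI architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch: fuse image, retrieved-case, and indication features via attention."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, case_tokens, indication_tokens):
        # Queries come from the image; keys/values from retrieved cases + indications.
        context = torch.cat([case_tokens, indication_tokens], dim=1)
        fused, _ = self.attn(img_tokens, context, context)
        return self.norm(img_tokens + fused)   # residual + norm, standard transformer style

fusion = CrossModalFusion()
out = fusion(torch.randn(2, 49, 512),   # image patch features
             torch.randn(2, 20, 512),   # features of similar historical cases
             torch.randn(2, 10, 512))   # patient-specific indication embeddings
print(out.shape)  # torch.Size([2, 49, 512])
```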
arXiv Detail & Related papers (2024-05-23T01:29:47Z)
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose eye-tracking as an alternative to text reports for medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks.
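One way such a plug-and-play module could work is to let gaze-map similarity decide which images count as positive pairs in a standard contrastive loss; the similarity measure, threshold, and loss form below are assumptions for illustration, not the McGIP formulation.

```python
import torch
import torch.nn.functional as F

def gaze_guided_contrastive_loss(feats, gaze_maps, threshold=0.5, temperature=0.1):
    """Sketch: images whose radiologist gaze heatmaps are similar are treated as positives."""
    feats = F.normalize(feats, dim=1)                       # (N, D) image embeddings
    gaze = F.normalize(gaze_maps.flatten(1), dim=1)         # (N, H*W) flattened gaze heatmaps
    pos_mask = (gaze @ gaze.t()) > threshold                # gaze-similar pairs become positives
    pos_mask.fill_diagonal_(False)
    logits = feats @ feats.t() / temperature
    logits.fill_diagonal_(float('-inf'))                    # never contrast an image with itself
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()                 # average over anchors with positives

loss = gaze_guided_contrastive_loss(torch.randn(8, 128), torch.rand(8, 32, 32))
print(loss.item())
```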
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
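A minimal sketch of that kind of conditioning, where CNN patch features and demographic text embeddings are concatenated into the memory of a transformer decoder; layer sizes and the concatenation strategy are assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DemographicConditionedDecoder(nn.Module):
    """Sketch: decode report tokens over CXR features plus demographic embeddings."""

    def __init__(self, vocab_size, dim=512):
        super().__init__()
        cnn = models.resnet34(weights=None)
        self.cnn = nn.Sequential(*list(cnn.children())[:-2])   # keep spatial map (B, 512, 7, 7)
        self.tok_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, cxr, demo_emb, tokens):
        # cxr: (B, 3, 224, 224); demo_emb: (B, Ld, dim) embeddings of age/sex/etc.; tokens: (B, T).
        vis = self.cnn(cxr).flatten(2).transpose(1, 2)          # (B, 49, 512) visual tokens
        memory = torch.cat([vis, demo_emb], dim=1)              # visual + demographic context
        tgt = self.tok_embed(tokens)                            # (B, T, dim)
        T = tokens.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=tokens.device), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=causal))  # (B, T, vocab_size)

model = DemographicConditionedDecoder(vocab_size=8000)
logits = model(torch.randn(2, 3, 224, 224),
               torch.randn(2, 4, 512),
               torch.randint(0, 8000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 8000])
```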
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, with unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Generation of Radiology Findings in Chest X-Ray by Leveraging Collaborative Knowledge [6.792487817626456]
The cognitive task of interpreting medical images remains the most critical and often time-consuming step in the radiology workflow.
This work focuses on reducing the workload of radiologists who spend most of their time either writing or narrating the Findings.
Unlike past research, which addresses radiology report generation as a single-step image captioning task, we have further taken into consideration the complexity of interpreting CXR images.
arXiv Detail & Related papers (2023-06-18T00:51:28Z)
- Cyclic Generative Adversarial Networks With Congruent Image-Report Generation For Explainable Medical Image Analysis [5.6512908295414]
We present a novel framework for explainable labeling and interpretation of medical images.
The aim of the work is to generate trustworthy and faithful explanations for the outputs of a model diagnosing chest x-ray images.
arXiv Detail & Related papers (2022-11-16T12:41:21Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO dataset.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
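A drastically simplified sketch of the conditional-GAN augmentation idea: a generator translates a source X-ray toward a target disease label, and the generated images are then mixed into the detector's training data. The architectures and losses below are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Sketch: translate a source X-ray toward a target disease label, keeping patient identity."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, 16)
        self.net = nn.Sequential(
            nn.Conv2d(1 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target_label):
        # Broadcast the label embedding over the spatial grid and concatenate as extra channels.
        b, _, h, w = x.shape
        lab = self.label_embed(target_label).view(b, 16, 1, 1).expand(b, 16, h, w)
        return self.net(torch.cat([x, lab], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = CondGenerator(), Discriminator()
src = torch.randn(4, 1, 128, 128)                       # source-domain X-rays
fake = G(src, torch.ones(4, dtype=torch.long))          # translate toward the "diseased" domain
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    D(fake), torch.ones(4, 1))                          # generator's adversarial objective
# The generated images can then be added to the detector's training set as augmentation.
print(fake.shape, adv_loss.item())
```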
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
arXiv Detail & Related papers (2021-07-15T13:34:38Z)
- When Radiology Report Generation Meets Knowledge Graph [17.59749125131158]
The accuracy of positive disease keyword mentions is critical in radiology image reporting.
The evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes.
We also propose a new evaluation metric for radiology image reporting that is computed with the assistance of the composed knowledge graph.
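Reduced to its simplest form, such a metric scores the overlap of disease keyword mentions between generated and reference reports; the sketch below is only that simplification, not the paper's graph-based metric.

```python
def keyword_f1(generated, reference, disease_keywords):
    """Sketch: score a generated report by overlap of disease keyword mentions
    with the reference report (a simplification of graph-based matching)."""
    gen = {k for k in disease_keywords if k in generated.lower()}
    ref = {k for k in disease_keywords if k in reference.lower()}
    if not gen and not ref:
        return 1.0                      # neither report mentions any tracked finding
    tp = len(gen & ref)
    precision = tp / len(gen) if gen else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

keywords = ["effusion", "osteophyte", "joint space narrowing", "fracture"]
print(keyword_f1("Mild joint space narrowing with small osteophyte.",
                 "Osteophyte formation and joint space narrowing.",
                 keywords))  # 1.0
```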
arXiv Detail & Related papers (2020-02-19T16:39:42Z)