Chest X-ray Report Generation through Fine-Grained Label Learning
- URL: http://arxiv.org/abs/2007.13831v1
- Date: Mon, 27 Jul 2020 19:50:56 GMT
- Title: Chest X-ray Report Generation through Fine-Grained Label Learning
- Authors: Tanveer Syeda-Mahmood, Ken C. L. Wong, Yaniv Gur, Joy T. Wu, Ashutosh
Jadhav, Satyananda Kashyap, Alexandros Karargyris, Anup Pillai, Arjun Sharma,
Ali Bin Syed, Orest Boyko, Mehdi Moradi
- Abstract summary: We present a domain-aware automatic chest X-ray radiology report generation algorithm that learns fine-grained description of findings from images.
We also develop an automatic labeling algorithm for assigning such descriptors to images and build a novel deep learning network that recognizes both coarse and fine-grained descriptions of findings.
- Score: 46.352966049776875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining automated preliminary read reports for common exams such as chest
X-rays will expedite clinical workflows and improve operational efficiencies in
hospitals. However, the quality of reports generated by current automated
approaches is not yet clinically acceptable as they cannot ensure the correct
detection of a broad spectrum of radiographic findings nor describe them
accurately in terms of laterality, anatomical location, severity, etc. In this
work, we present a domain-aware automatic chest X-ray radiology report
generation algorithm that learns fine-grained description of findings from
images and uses their pattern of occurrences to retrieve and customize similar
reports from a large report database. We also develop an automatic labeling
algorithm for assigning such descriptors to images and build a novel deep
learning network that recognizes both coarse and fine-grained descriptions of
findings. The resulting report generation algorithm significantly outperforms
the state of the art using established score metrics.
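The retrieval step described in the abstract — using the pattern of fine-grained finding descriptors to pull a similar report from a large database — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the label encoding, the Jaccard similarity measure, and the toy report database are all assumptions made for the example.

```python
# Hypothetical sketch: retrieve the best-matching report from a database
# by comparing sets of fine-grained finding descriptors (assumed encoding
# of finding|laterality|location|severity, not the paper's actual format).

def jaccard(a, b):
    """Jaccard similarity between two sets of fine-grained labels."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def retrieve_report(image_labels, report_db):
    """Return the stored report whose label pattern best matches the image.

    image_labels: set of fine-grained descriptors predicted for the image.
    report_db: list of (label_set, report_text) pairs.
    """
    return max(report_db, key=lambda entry: jaccard(image_labels, entry[0]))[1]

# Toy database with made-up laterality/severity-qualified labels.
db = [
    ({"opacity|left|lower_lobe|mild"}, "Mild left lower lobe opacity."),
    ({"effusion|right|moderate", "cardiomegaly"},
     "Moderate right effusion; enlarged heart."),
    ({"no_finding"}, "No acute cardiopulmonary abnormality."),
]

predicted = {"effusion|right|moderate", "cardiomegaly"}
print(retrieve_report(predicted, db))  # prints "Moderate right effusion; enlarged heart."
```

In the paper the retrieved report is additionally customized to the image; here the sketch stops at nearest-neighbor retrieval over label patterns.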
Related papers
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient Indications incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
arXiv Detail & Related papers (2024-05-23T01:29:47Z)
- Finding-Aware Anatomical Tokens for Chest X-Ray Automated Reporting [13.151444796296868]
We introduce a novel adaptation of Faster R-CNN in which finding detection is performed for the candidate bounding boxes extracted during anatomical structure localisation.
We use the resulting bounding box feature representations as our set of finding-aware anatomical tokens.
We show that task-aware anatomical tokens give state-of-the-art performance when integrated into an automated reporting pipeline.
arXiv Detail & Related papers (2023-08-30T11:35:21Z)
- Fact-Checking of AI-Generated Reports [10.458946019567891]
We propose a new method of fact-checking of AI-generated reports using their associated images.
Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings.
arXiv Detail & Related papers (2023-07-27T05:49:24Z)
- Attributed Abnormality Graph Embedding for Clinically Accurate X-Ray Report Generation [7.118069629513661]
We introduce a novel fine-grained knowledge graph structure called an attributed abnormality graph (ATAG).
The ATAG consists of interconnected abnormality nodes and attribute nodes, allowing it to better capture the abnormality details.
We show that the proposed ATAG-based deep model outperforms the SOTA methods by a large margin and can improve the clinical accuracy of the generated reports.
arXiv Detail & Related papers (2022-07-04T05:32:00Z)
- Breaking with Fixed Set Pathology Recognition through Report-Guided Contrastive Training [23.506879497561712]
We employ a contrastive global-local dual-encoder architecture to learn concepts directly from unstructured medical reports.
We evaluate our approach on the large-scale chest X-Ray datasets MIMIC-CXR, CheXpert, and ChestX-Ray14 for disease classification.
arXiv Detail & Related papers (2022-05-14T21:44:05Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from the same source as its training set performs poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, who judged the geometry acceptable in 95.8% of cases and the annotation mask acceptable in 96.2%, compared to 27.0% and 34.9% respectively for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
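The shared-encoder, two-head ("Y"-shaped) design in the last entry can be sketched in a few lines. This is a toy forward pass only, assuming made-up layer sizes and random weights; the paper's actual model is a modified Y-Net built on a VGG11 encoder.

```python
import numpy as np

# Toy sketch of a Y-shaped model: one shared encoder feeding two heads,
# one classifying geometric orientation, one producing a segmentation mask.
# All sizes and weights here are illustrative assumptions.

rng = np.random.default_rng(0)

def encoder(x):
    """Flatten the radiograph and project it to a shared feature vector."""
    w = rng.standard_normal((x.size, 16)) * 0.01
    return np.tanh(x.reshape(-1) @ w)

def orientation_head(feat, n_classes=4):
    """Classify rotation (e.g. 0/90/180/270 degrees) from shared features."""
    w = rng.standard_normal((feat.size, n_classes)) * 0.01
    logits = feat @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax probabilities

def segmentation_head(feat, h=8, w=8):
    """Predict a per-pixel annotation mask from the same shared features."""
    w_mat = rng.standard_normal((feat.size, h * w)) * 0.01
    return (1.0 / (1.0 + np.exp(-(feat @ w_mat)))).reshape(h, w)  # sigmoid mask

x = rng.standard_normal((8, 8))             # toy 8x8 "radiograph"
feat = encoder(x)
probs = orientation_head(feat)              # shape (4,)
mask = segmentation_head(feat)              # shape (8, 8)
print(probs.shape, mask.shape)
```

The point of the shared trunk is that both preprocessing tasks (orientation and annotation masking) reuse one feature extraction pass over the radiograph.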
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.