Automated Chest X-Ray Report Generator Using Multi-Model Deep Learning
Approach
- URL: http://arxiv.org/abs/2310.05969v3
- Date: Sat, 27 Jan 2024 12:51:05 GMT
- Title: Automated Chest X-Ray Report Generator Using Multi-Model Deep Learning
Approach
- Authors: Arief Purnama Muharram, Hollyana Puteri Haryono, Abassi Haji Juma, Ira
Puspasari and Nugraha Priya Utama
- Abstract summary: The system generates a radiology report by performing the following three steps: image pre-processing, utilizing deep learning models to detect abnormalities, and producing a report.
The proposed system is expected to reduce the workload of radiologists and increase the accuracy of chest X-ray diagnosis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reading and interpreting chest X-ray images is one of radiologists'
most common routines. However, it can still be challenging, even for the most
experienced ones. Therefore, we proposed a multi-model deep learning-based
automated chest X-ray report generator system designed to assist radiologists
in their work. The basic idea of the proposed system is to use multiple
binary-classification models to detect multiple abnormalities in a single
image, with each model responsible for detecting one abnormality. In this
study, we limited abnormality detection to cardiomegaly, lung
effusion, and consolidation. The system generates a radiology report by
performing the following three steps: image pre-processing, utilizing deep
learning models to detect abnormalities, and producing a report. The aim of the
image pre-processing step is to standardize the input by scaling it to 128x128
pixels and slicing it into three segments, which cover the upper, lower, and
middle parts of the lung. After pre-processing, each corresponding model
classifies the image, resulting in a 0 (zero) for no abnormality detected and a
1 (one) for the presence of an abnormality. The prediction outputs of each
model are then concatenated to form a 'result code'. The 'result code' is used
to construct a report by selecting the appropriate pre-determined sentence for
each detected abnormality in the report generation step. The proposed system is
expected to reduce the workload of radiologists and increase the accuracy of
chest X-ray diagnosis.
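To make the three-step pipeline concrete, the following is a minimal sketch of how such a system could be wired together. It is an illustration only: the classifier interface, the segment-to-model assignment, the 0.5 decision threshold, and the report sentence wording are assumptions not specified in the abstract, not the authors' implementation.

```python
# Minimal sketch of the three-step pipeline described in the abstract.
# Classifier interface, segment-to-model assignment, threshold, and sentence
# wording are illustrative assumptions, not the authors' code.
import numpy as np
import cv2  # assumed image library; any resize routine would work

ABNORMALITIES = ["cardiomegaly", "lung_effusion", "consolidation"]

# Pre-determined sentences selected per detected abnormality (wording is hypothetical).
FINDING_SENTENCES = {
    "cardiomegaly": "The cardiac silhouette is enlarged, consistent with cardiomegaly.",
    "lung_effusion": "Pleural effusion is present.",
    "consolidation": "Areas of consolidation are observed.",
}
NORMAL_SENTENCE = "No cardiomegaly, effusion, or consolidation detected."


def preprocess(image: np.ndarray) -> dict:
    """Step 1: scale the input to 128x128 pixels and slice it into three
    segments covering the upper, middle, and lower parts of the lung."""
    resized = cv2.resize(image, (128, 128))
    h = resized.shape[0]
    return {
        "upper": resized[: h // 3],
        "middle": resized[h // 3 : 2 * h // 3],
        "lower": resized[2 * h // 3 :],
        "full": resized,
    }


def build_result_code(segments: dict, models: dict) -> str:
    """Step 2: each binary model classifies the image (0 = no abnormality,
    1 = abnormality present); the outputs are concatenated into a 'result code'."""
    bits = []
    for name in ABNORMALITIES:
        # `models[name]` is a hypothetical trained binary classifier for one abnormality.
        # Which segment(s) each model consumes is not specified in the abstract,
        # so all segments are passed here.
        prob = models[name].predict_probability(segments)
        bits.append("1" if prob >= 0.5 else "0")
    return "".join(bits)  # e.g. "101" -> cardiomegaly and consolidation detected


def generate_report(result_code: str) -> str:
    """Step 3: select the pre-determined sentence for each detected abnormality."""
    findings = [
        FINDING_SENTENCES[name]
        for name, bit in zip(ABNORMALITIES, result_code)
        if bit == "1"
    ]
    return " ".join(findings) if findings else NORMAL_SENTENCE
```

Given a chest X-ray array `img` and a dictionary of three trained models, `generate_report(build_result_code(preprocess(img), models))` would yield the templated report under these assumptions.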
Related papers
- Replace and Report: NLP Assisted Radiology Report Generation [31.309987297324845]
We propose a template-based approach to generate radiology reports from radiographs.
This is the first attempt to generate chest X-ray radiology reports by first creating small sentences for abnormal findings and then replacing them in the normal report template.
arXiv Detail & Related papers (2023-06-19T10:04:42Z) - Diffusion Models for Medical Anomaly Detection [0.8999666725996974]
We present a novel weakly supervised anomaly detection method based on denoising diffusion implicit models.
Our method generates very detailed anomaly maps without the need for a complex training procedure.
arXiv Detail & Related papers (2022-03-08T12:35:07Z) - Debiasing pipeline improves deep learning model generalization for X-ray
based lung nodule detection [11.228544549618068]
Lung cancer is the leading cause of cancer death worldwide and a good prognosis depends on early diagnosis.
We show that an image pre-processing pipeline that homogenizes and debiases chest X-ray images can improve both internal classification and external generalization.
An evolutionary pruning mechanism is used to train a nodule detection deep learning model on the most informative images from a publicly available lung nodule X-ray dataset.
arXiv Detail & Related papers (2022-01-24T10:08:07Z) - Explainable multiple abnormality classification of chest CT volumes with
AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose Contrastive Attention (CA) model, which compares the current input image with normal images to distill the contrastive information.
We achieve the state-of-the-art results on the two public datasets.
arXiv Detail & Related papers (2021-06-13T11:20:31Z) - Computer-aided abnormality detection in chest radiographs in a clinical
setting via domain-adaptation [0.23624125155742057]
Deep learning (DL) models are being deployed at medical centers to aid radiologists for diagnosis of lung conditions from chest radiographs.
These pre-trained DL models' ability to generalize in clinical settings is poor because of the changes in data distributions between publicly available and privately held radiographs.
In this work, we introduce a domain-shift detection and removal method to overcome this problem.
arXiv Detail & Related papers (2020-12-19T01:01:48Z) - XraySyn: Realistic View Synthesis From a Single Radiograph Through CT
Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z) - Convolutional-LSTM for Multi-Image to Single Output Medical Prediction [55.41644538483948]
A common scenario in developing countries is that volume metadata is lost for multiple reasons.
It is possible to build a multi-image to single-output diagnostic model that mimics the human doctor's diagnostic process.
arXiv Detail & Related papers (2020-10-20T04:30:09Z) - Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model performing well when tested on the same dataset as training data starts to perform poorly when it is tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z) - XRayGAN: Consistency-preserving Generation of X-ray Images from
Radiology Reports [19.360283053558604]
We develop methods to generate view-consistent, high-fidelity, and high-resolution X-ray images from radiology reports.
This work is the first to generate consistent and high-resolution X-ray images from radiology reports.
arXiv Detail & Related papers (2020-06-17T05:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.