Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary
Edema Assessment
- URL: http://arxiv.org/abs/2008.09884v1
- Date: Sat, 22 Aug 2020 17:28:39 GMT
- Title: Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary
Edema Assessment
- Authors: Geeticka Chauhan, Ruizhi Liao, William Wells, Jacob Andreas, Xin Wang,
Seth Berkowitz, Steven Horng, Peter Szolovits, Polina Golland
- Abstract summary: We develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time.
Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment.
- Score: 39.60171837961607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose and demonstrate a novel machine learning algorithm that assesses
pulmonary edema severity from chest radiographs. While large publicly available
datasets of chest radiographs and free-text radiology reports exist, only
limited numerical edema severity labels can be extracted from radiology
reports. This is a significant challenge in learning such models for image
classification. To take advantage of the rich information present in the
radiology reports, we develop a neural network model that is trained on both
images and free-text to assess pulmonary edema severity from chest radiographs
at inference time. Our experimental results suggest that the joint image-text
representation learning improves the performance of pulmonary edema assessment
compared to a supervised model trained on images only. We also show the use of
the text for explaining the image classification by the joint model. To the
best of our knowledge, our approach is the first to leverage free-text
radiology reports for improving the image model performance in this
application. Our code is available at
https://github.com/RayRuizhiLiao/joint_chestxray.
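The core idea, training an image encoder and a text encoder so that matched image-report pairs land close together in a shared embedding space, can be illustrated with a small numerical sketch. This is not the authors' model (their code is at the repository above); it is a generic InfoNCE-style contrastive alignment over toy embedding vectors, and all names, shapes, and the loss choice are illustrative assumptions. At inference time only the image branch would be used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for encoder outputs: a batch of 4 paired (image, report)
# embeddings of dimension 8. A real model would produce these with a CNN
# image encoder and a text encoder.
img_emb = rng.normal(size=(4, 8))
txt_emb = rng.normal(size=(4, 8))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_alignment_loss(img, txt, temperature=0.1):
    """Symmetric InfoNCE-style loss: pulls each image embedding toward
    its paired report embedding (the diagonal of the similarity matrix)
    and pushes it away from the mismatched reports."""
    img, txt = l2_normalize(img), l2_normalize(txt)
    logits = img @ txt.T / temperature      # pairwise cosine similarities
    labels = np.arange(len(img))            # matched pairs sit on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

loss = contrastive_alignment_loss(img_emb, txt_emb)
print(f"alignment loss on random pairs: {loss:.3f}")
```

Minimizing such a loss during joint training shapes the image representation with information from the free-text reports; a severity classifier on top of the image embedding can then be applied to radiographs alone at inference time.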
Related papers
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report
Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New
Benchmark Study [75.05049024176584]
We present a benchmark study of the long-tailed learning problem in the specific domain of thorax diseases on chest X-rays.
We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy over not only the common "head" classes, but also the rare yet critical "tail" classes.
The benchmark consists of two chest X-ray datasets for 19- and 20-way thorax disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images.
arXiv Detail & Related papers (2022-08-29T04:34:15Z)
- RadTex: Learning Efficient Radiograph Representations from Text Reports [7.090896766922791]
We build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data.
Our model achieves higher classification performance than ImageNet-supervised pretraining when labeled training data is limited.
arXiv Detail & Related papers (2022-08-05T15:06:26Z)
- Using Multi-modal Data for Improving Generalizability and Explainability
of Disease Classification in Radiology [0.0]
Traditional datasets for radiological diagnosis tend to provide only the radiology image alongside the radiology report.
This paper utilizes the recently published Eye-Gaze dataset to perform an exhaustive study of the impact on the performance and explainability of deep learning (DL) classification.
We find that the best classification performance of X-ray images is achieved with a combination of radiology report free-text and radiology image, with the eye-gaze data providing no performance boost.
arXiv Detail & Related papers (2022-07-29T16:49:05Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Debiasing pipeline improves deep learning model generalization for X-ray
based lung nodule detection [11.228544549618068]
Lung cancer is the leading cause of cancer death worldwide and a good prognosis depends on early diagnosis.
We show that an image pre-processing pipeline that homogenizes and debiases chest X-ray images can improve both internal classification and external generalization.
An evolutionary pruning mechanism is used to train a nodule detection deep learning model on the most informative images from a publicly available lung nodule X-ray dataset.
arXiv Detail & Related papers (2022-01-24T10:08:07Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from its training dataset starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Evaluation of Contemporary Convolutional Neural Network Architectures
for Detecting COVID-19 from Chest Radiographs [0.0]
We train and evaluate three model architectures, proposed for chest radiograph analysis, under varying conditions.
We find issues that discount the impressive model performances reported by contemporary studies on this subject.
arXiv Detail & Related papers (2020-06-30T15:22:39Z)
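To make the wavelet idea in the "Preservation of High Frequency Content" entry above concrete, the sketch below applies one level of the 2-D Haar transform, the simplest discrete wavelet, to a toy image. This is a generic illustration of the decomposition such methods build on, not that paper's specific encoding: the image is split into a low-frequency approximation band (LL) and three high-frequency detail bands (LH, HL, HH) that retain edge information.

```python
import numpy as np

def haar_dwt2_level1(img):
    """One level of the 2-D Haar DWT.

    Returns the low-frequency approximation band (LL) and the three
    high-frequency detail bands (LH, HL, HH). Assumes even side lengths.
    """
    # Pairwise averages and differences along the rows.
    avg = (img[0::2, :] + img[1::2, :]) / 2.0
    dif = (img[0::2, :] - img[1::2, :]) / 2.0
    # Repeat along the columns to obtain the four sub-bands.
    ll = (avg[:, 0::2] + avg[:, 1::2]) / 2.0   # approximation
    lh = (avg[:, 0::2] - avg[:, 1::2]) / 2.0   # horizontal detail
    hl = (dif[:, 0::2] + dif[:, 1::2]) / 2.0   # vertical detail
    hh = (dif[:, 0::2] - dif[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

# A 4x4 intensity ramp as a stand-in for a radiograph patch.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2_level1(img)
# ll holds the 2x2 block means: [[2.5, 4.5], [10.5, 12.5]]
# hh is all zeros here because a linear ramp has no diagonal detail.
```

The /2 normalization used here makes LL the block means; the orthonormal Haar convention divides by sqrt(2) at each step instead. Either way, the detail bands carry the high-frequency content that such classification methods aim to preserve.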
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.