U-Net-based Lung Thickness Map for Pixel-level Lung Volume Estimation of Chest X-rays
- URL: http://arxiv.org/abs/2110.12509v5
- Date: Wed, 31 Jul 2024 13:41:24 GMT
- Title: U-Net-based Lung Thickness Map for Pixel-level Lung Volume Estimation of Chest X-rays
- Authors: Tina Dorosti, Manuel Schultheiss, Philipp Schmette, Jule Heuchert, Johannes Thalhammer, Florian Schaff, Thorsten Sellerer, Rafael Schick, Kirsten Taphorn, Korbinian Mechlem, Lorenz Birnbacher, Franz Pfeiffer, Daniela Pfeiffer,
- Abstract summary: We aimed to estimate the total lung volume (TLV) from real and synthetic frontal X-ray radiographs on a pixel level using lung thickness maps generated by a U-Net.
A U-Net model was trained and tested on synthetic radiographs from the public datasets to predict lung thickness maps and consequently estimate TLV.
- Score: 4.595143640439819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: We aimed to estimate the total lung volume (TLV) from real and synthetic frontal X-ray radiographs on a pixel level using lung thickness maps generated by a U-Net. Methods: 5,959 thorax X-ray computed tomography (CT) scans were retrieved from two publicly available datasets, the lung nodule analysis 2016 (n=656) and the RSNA pulmonary embolism detection challenge 2020 (n=5,303). Additionally, thorax CT scans from 72 subjects (33 healthy: 20 men, mean age [range] = 62.4 [34, 80]; 39 suffering from chronic obstructive pulmonary disease: 25 men, mean age [range] = 69.0 [47, 91]) were retrospectively selected (10.2018-12.2019) from our in-house dataset such that for each subject, a frontal chest X-ray radiograph no older than seven days was available. All CT scans and their corresponding lung segmentations were forward projected using a simulated X-ray spectrum to generate synthetic radiographs and lung thickness maps, respectively. A U-Net model was trained and tested on synthetic radiographs from the public datasets to predict lung thickness maps and consequently estimate TLV. Model performance was further assessed by evaluating the TLV estimations for the in-house synthetic and real radiograph pairs using the Pearson correlation coefficient (r) and significance testing. Results: Strong correlations were measured between the predicted and CT-derived ground truth TLV values for test data from synthetic ($n_{Public}$=1,191, r=0.987, P < 0.001; $n_{In-house}$=72, r=0.973, P < 0.001) and real radiographs (n=72, r=0.908, P < 0.001). Conclusion: TLV was successfully estimated from U-Net-generated pixel-level lung thickness maps for both synthetic and real radiographs.
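The abstract describes the pipeline only at a high level. As a minimal sketch of the pixel-level volume estimation step, assuming a simplified parallel-beam projection geometry, binary lung masks, and hypothetical helper names (`lung_thickness_map`, `total_lung_volume_litres`), the Python snippet below shows how a per-pixel lung thickness map can be integrated into a TLV estimate and compared against CT-derived ground truth with the Pearson correlation coefficient. It is an illustration under stated assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import pearsonr

def lung_thickness_map(lung_mask, voxel_size_mm, axis=1):
    """Project a binary 3D lung segmentation along one axis to obtain a
    per-pixel lung thickness map in mm (simplified parallel-beam geometry;
    the paper forward projects with a simulated X-ray spectrum)."""
    return lung_mask.sum(axis=axis) * voxel_size_mm[axis]

def total_lung_volume_litres(thickness_map_mm, pixel_spacing_mm):
    """Integrate the thickness map over the image plane:
    TLV = sum(thickness * pixel area), converted from mm^3 to litres."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return thickness_map_mm.sum() * pixel_area_mm2 / 1e6

# Toy example: a synthetic lung mask, plus a comparison of "predicted" vs.
# "ground-truth" TLV values with the Pearson correlation, mirroring the
# evaluation described in the abstract (all numbers are placeholders).
mask = np.zeros((256, 256, 256), dtype=bool)
mask[64:192, 64:192, 64:192] = True                      # fake lung region
tmap = lung_thickness_map(mask, voxel_size_mm=(1.0, 1.0, 1.0))
print(f"TLV of toy mask: {total_lung_volume_litres(tmap, (1.0, 1.0)):.2f} L")

rng = np.random.default_rng(0)
tlv_ct = rng.uniform(3.0, 7.0, size=72)                  # placeholder CT-derived TLV (L)
tlv_pred = tlv_ct + rng.normal(0.0, 0.3, size=72)        # placeholder U-Net-based estimates
r, p = pearsonr(tlv_ct, tlv_pred)
print(f"r = {r:.3f}, P = {p:.3g}")
```

In the actual pipeline the thickness map would come from the U-Net prediction on a synthetic or real radiograph rather than from a projected mask; the projection step above only illustrates how the ground-truth maps and volumes relate to the CT segmentation.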
Related papers
- BeyondCT: A deep learning model for predicting pulmonary function from chest CT scans [2.602923751641061]
The BeyondCT model was developed to predict forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) from non-contrasted inspiratory chest CT scans.
The model showed robust performance in predicting lung function from non-contrast inspiratory chest CT scans.
arXiv Detail & Related papers (2024-08-10T22:28:02Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
The study investigates the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to those of state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- COVID-Rate: An Automated Framework for Segmentation of COVID-19 Lesions from Chest CT Scans [29.266579630983358]
During the pandemic, visual assessment and quantification of COVID-19 lung lesions by expert radiologists became expensive and prone to error.
This paper introduces an open access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients that have been annotated by an expert radiologist.
A Deep Neural Network (DNN)-based framework is proposed, referred to as the COVID-Rate, that autonomously segments lung abnormalities associated with COVID-19 from chest CT scans.
arXiv Detail & Related papers (2021-07-04T03:19:43Z)
- CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning [133.87426554801252]
We adopted an approach based on an ensemble of deep convolutional neural networks for segmentation of lung CT scans.
Using our models, we are able to segment the lesions, evaluate patient dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage.
arXiv Detail & Related papers (2021-05-25T12:06:55Z)
- Automated Estimation of Total Lung Volume using Chest Radiographs and Deep Learning [4.874501619350224]
Total lung volume is an important quantitative biomarker and is used for the assessment of restrictive lung diseases.
This dataset was used to train deep-learning architectures to predict total lung volume from chest radiographs.
We demonstrate, for the first time, that state-of-the-art deep learning solutions can accurately measure total lung volume from plain chest radiographs.
arXiv Detail & Related papers (2021-05-03T21:35:16Z)
- Deep Learning to Quantify Pulmonary Edema in Chest Radiographs [7.121765928263759]
We developed a machine learning model to classify the severity grades of pulmonary edema on chest radiographs.
Deep learning models were trained on a large chest radiograph dataset.
arXiv Detail & Related papers (2020-08-13T15:45:44Z)
- Integrative Analysis for COVID-19 Patient Outcome Prediction [53.11258640541513]
We combine radiomics of lung opacities and non-imaging features from demographic data, vital signs, and laboratory findings to predict the need for intensive care unit admission.
Our methods may also be applied to other lung diseases, including but not limited to community-acquired pneumonia.
arXiv Detail & Related papers (2020-07-20T19:08:50Z)
- Improving performance of CNN to predict likelihood of COVID-19 using chest X-ray images with preprocessing algorithms [0.3180570080674292]
The study demonstrates the feasibility of developing a computer-aided diagnosis (CAD) scheme for chest X-ray images.
A dataset of 8,474 chest X-ray images is used to train and test the CNN-based CAD scheme.
Testing results show 94.0% overall accuracy in classifying three classes and 98.6% accuracy in detecting COVID-19-infected cases.
arXiv Detail & Related papers (2020-06-11T16:45:46Z)
- Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning [57.00601760750389]
We present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
Such a tool can gauge the severity of COVID-19 lung infections and can be used for escalation or de-escalation of care.
arXiv Detail & Related papers (2020-05-24T23:13:16Z)
- Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
- Severity Assessment of Coronavirus Disease 2019 (COVID-19) Using Quantitative Features from Chest CT Images [54.919022945740515]
The aim of this study is to realize automatic severity assessment (non-severe or severe) of COVID-19 based on chest CT images.
A random forest (RF) model is trained to assess the severity (non-severe or severe) based on quantitative features.
Several quantitative features with the potential to reflect the severity of COVID-19 were identified.
arXiv Detail & Related papers (2020-03-26T15:49:32Z)
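As a rough illustration of the random-forest severity assessment summarized in the last entry above, the sketch below trains a scikit-learn RandomForestClassifier on placeholder quantitative features; the number of patients, the feature set, and the labels are hypothetical and do not reproduce that paper's data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder per-patient quantitative features (e.g. lesion volume, lesion-to-lung
# volume ratio, mean lesion density) -- hypothetical, not the paper's feature set.
rng = np.random.default_rng(0)
X = rng.random((176, 30))                                          # 176 patients, 30 features
y = (X[:, 0] + 0.2 * rng.standard_normal(176) > 0.6).astype(int)   # 1 = severe, 0 = non-severe

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                          # 5-fold cross-validated accuracy
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Fit on all data and inspect which quantitative features the forest relies on most.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative feature indices:", top)
```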
This list is automatically generated from the titles and abstracts of the papers on this site.