BeyondCT: A deep learning model for predicting pulmonary function from chest CT scans
- URL: http://arxiv.org/abs/2408.05645v1
- Date: Sat, 10 Aug 2024 22:28:02 GMT
- Title: BeyondCT: A deep learning model for predicting pulmonary function from chest CT scans
- Authors: Kaiwen Geng, Zhiyi Shi, Xiaoyan Zhao, Alaa Ali, Jing Wang, Joseph Leader, Jiantao Pu
- Abstract summary: The BeyondCT model was developed to predict forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) from non-contrasted inspiratory chest CT scans.
The model showed robust performance in predicting lung function from non-contrast inspiratory chest CT scans.
- Score: 2.602923751641061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Pulmonary function tests (PFTs) and computed tomography (CT) imaging are vital in diagnosing, managing, and monitoring lung diseases. A common issue in practice is the lack of access to recorded pulmonary function despite available chest CT scans. Purpose: To develop and validate a deep learning algorithm for predicting pulmonary function directly from chest CT scans. Methods: The development cohort came from the Pittsburgh Lung Screening Study (PLuSS) (n=3619). The validation cohort came from the Specialized Centers of Clinically Oriented Research (SCCOR) in COPD (n=662). A deep learning model called BeyondCT, combining a three-dimensional (3D) convolutional neural network (CNN) and Vision Transformer (ViT) architecture, was used to predict forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) from non-contrasted inspiratory chest CT scans. A 3D CNN model without ViT was used for comparison. Subject demographics (age, gender, smoking status) were also incorporated into the model. Performance was compared to actual PFTs using mean absolute error (MAE, L), percentage error, and R-squared. Results: The 3D-CNN model achieved MAEs of 0.395 L and 0.383 L, percentage errors of 13.84% and 18.85%, and R-squared values of 0.665 and 0.679 for FVC and FEV1, respectively. The BeyondCT model without demographics had MAEs of 0.362 L and 0.371 L, percentage errors of 10.89% and 14.96%, and R-squared values of 0.719 and 0.727, respectively. Including demographics improved performance (p<0.05), with MAEs of 0.356 L and 0.353 L, percentage errors of 10.79% and 14.82%, and R-squared values of 0.77 and 0.739 for FVC and FEV1 in the test set. Conclusion: The BeyondCT model showed robust performance in predicting lung function from non-contrast inspiratory chest CT scans.
Related papers
- Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray [86.38767955626179]
A deep-learning algorithm to predict coronary artery calcium (CAC) score was developed on 460 chest x-rays.
The diagnostic accuracy of the AICAC model assessed by the area under the curve (AUC) was the primary outcome.
arXiv Detail & Related papers (2024-03-27T16:56:14Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
To investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - U-Net-based Lung Thickness Map for Pixel-level Lung Volume Estimation of Chest X-rays [4.595143640439819]
We aimed to estimate total lung volume (TLV) from real and synthetic frontal X-ray radiographs on a pixel level using lung thickness maps generated by a U-Net.
A U-Net model was trained and tested on synthetic radiographs from public datasets to predict lung thickness maps and consequently estimate TLV.
arXiv Detail & Related papers (2021-10-24T19:09:28Z) - Automated Estimation of Total Lung Volume using Chest Radiographs and Deep Learning [4.874501619350224]
Total lung volume is an important quantitative biomarker and is used for the assessment of restrictive lung diseases.
This dataset was used to train deep-learning architectures to predict total lung volume from chest radiographs.
We demonstrate, for the first time, that state-of-the-art deep learning solutions can accurately measure total lung volume from plain chest radiographs.
arXiv Detail & Related papers (2021-05-03T21:35:16Z) - Rapid quantification of COVID-19 pneumonia burden from computed tomography with convolutional LSTM networks [1.0072268949897432]
We propose a new fully automated deep learning framework for rapid quantification and differentiation between lung lesions in COVID-19 pneumonia.
The performance of the method was evaluated on CT data sets from 197 patients with positive reverse transcription polymerase chain reaction test result for SARS-CoV-2.
arXiv Detail & Related papers (2021-03-31T22:09:14Z) - FLANNEL: Focal Loss Based Neural Network Ensemble for COVID-19 Detection [61.04937460198252]
We construct the X-ray imaging data from 2874 patients with four classes: normal, bacterial pneumonia, non-COVID-19 viral pneumonia, and COVID-19.
To identify COVID-19, we propose a Focal Loss Based Neural Ensemble Network (FLANNEL)
FLANNEL consistently outperforms baseline models on COVID-19 identification task in all metrics.
arXiv Detail & Related papers (2020-10-30T03:17:31Z) - Deep Learning to Quantify Pulmonary Edema in Chest Radiographs [7.121765928263759]
We developed a machine learning model to classify the severity grades of pulmonary edema on chest radiographs.
Deep learning models were trained on a large chest radiograph dataset.
arXiv Detail & Related papers (2020-08-13T15:45:44Z) - Multiple resolution residual network for automatic thoracic organs-at-risk segmentation from CT [2.9023633922848586]
We implement and evaluate a multiple resolution residual network (MRRN) for multiple normal organs-at-risk (OAR) segmentation from computed tomography (CT) images.
Our approach simultaneously combines feature streams computed at multiple image resolutions and feature levels through residual connections.
We trained our approach using 206 thoracic CT scans of lung cancer patients with 35 scans held out for validation to segment the left and right lungs, heart, esophagus, and spinal cord.
arXiv Detail & Related papers (2020-05-27T22:39:09Z) - Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning [57.00601760750389]
We present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
Such a tool can gauge severity of COVID-19 lung infections that can be used for escalation or de-escalation of care.
arXiv Detail & Related papers (2020-05-24T23:13:16Z) - Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z) - Severity Assessment of Coronavirus Disease 2019 (COVID-19) Using Quantitative Features from Chest CT Images [54.919022945740515]
The aim of this study is to achieve automatic severity assessment (non-severe or severe) of COVID-19 based on chest CT images.
A random forest (RF) model is trained to assess the severity (non-severe or severe) based on quantitative features.
Several quantitative features, which have the potential to reflect the severity of COVID-19, were revealed.
arXiv Detail & Related papers (2020-03-26T15:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.