Incremental Value and Interpretability of Radiomics Features of Both
Lung and Epicardial Adipose Tissue for Detecting the Severity of COVID-19
Infection
- URL: http://arxiv.org/abs/2301.12340v2
- Date: Wed, 6 Dec 2023 19:53:05 GMT
- Title: Incremental Value and Interpretability of Radiomics Features of Both
Lung and Epicardial Adipose Tissue for Detecting the Severity of COVID-19
Infection
- Authors: Ni Yao, Yanhui Tian, Daniel Gama das Neves, Chen Zhao, Claudio Tinoco
Mesquita, Wolney de Andrade Martins, Alair Augusto Sarmet Moreira Damas dos
Santos, Yanting Li, Chuang Han, Fubao Zhu, Neng Dai, Weihua Zhou
- Abstract summary: Current EAT segmentation methods do not consider positional information.
The detection of COVID-19 lacks severity consideration for EAT radiomics features, which limits interpretability.
This study investigates the use of radiomics features from EAT and lungs to detect the severity of COVID-19 infections.
- Score: 4.772846544299196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Epicardial adipose tissue (EAT) is known for its pro-inflammatory properties
and association with Coronavirus Disease 2019 (COVID-19) severity. However,
current EAT segmentation methods do not consider positional information.
Additionally, the detection of COVID-19 severity lacks consideration for EAT
radiomics features, which limits interpretability. This study investigates the
use of radiomics features from EAT and lungs to detect the severity of COVID-19
infections. A retrospective analysis of 515 patients with COVID-19 (Cohort1:
415, Cohort2: 100) was conducted using a proposed three-stage deep learning
approach for EAT extraction. Lung segmentation was achieved using a published
method. A hybrid model for detecting the severity of COVID-19 was built in a
derivation cohort, and its performance and uncertainty were evaluated in
internal (125, Cohort1) and external (100, Cohort2) validation cohorts. For EAT
extraction, the Dice similarity coefficients (DSC) of the two centers were
0.972 (±0.011) and 0.968 (±0.005), respectively. For severity detection, the
hybrid model with radiomics features of both lungs and EAT showed improvements
in AUC, net reclassification improvement (NRI), and integrated discrimination
improvement (IDI) compared to the model with only lung radiomics features. The
hybrid model exhibited increases of 0.1 (p<0.001), 19.3%, and 18.0%,
respectively, in the internal validation cohort, and increases of 0.09
(p<0.001), 18.0%, and 18.0%, respectively, in the external validation cohort
while outperforming existing detection methods. Uncertainty quantification and
radiomics features analysis confirmed the interpretability of case prediction
after inclusion of EAT features.
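The segmentation agreement reported above is the Dice similarity coefficient, 2|A∩B| / (|A| + |B|), computed between the predicted and reference masks. A minimal sketch in Python, assuming NumPy binary masks; the function name and toy masks are illustrative, not from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 foreground pixels, 4 shared
score = dice_coefficient(a, b)  # 2*4 / (4+6) = 0.8
```

A DSC near 0.97, as reported for both centers, means the automated EAT masks almost fully coincide with the reference annotations.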
Related papers
- Improving Fairness of Automated Chest X-ray Diagnosis by Contrastive
Learning [19.948079693716075]
Our proposed AI model utilizes supervised contrastive learning to minimize bias in CXR diagnosis.
We evaluated the methods on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXR images and the NIH Chest X-ray dataset with 112,120 CXR images.
arXiv Detail & Related papers (2024-01-25T20:03:57Z) - Deep learning automated quantification of lung disease in pulmonary
hypertension on CT pulmonary angiography: A preliminary clinical study with
external validation [0.0]
This study aims to develop an artificial intelligence (AI) deep learning model for lung texture classification in CT Pulmonary Angiography (CTPA)
"Normal", "Ground glass", "Ground glass with reticulation", "Honeycombing", and "Emphysema" were classified as per the Fleischner Society glossary of terms.
Proportion of lung volume for each texture was calculated by classifying patches throughout the entire lung volume to generate a coarse texture classification mapping throughout the lung parenchyma.
arXiv Detail & Related papers (2023-03-20T14:06:32Z) - Automated assessment of disease severity of COVID-19 using artificial
intelligence with synthetic chest CT [13.44182694693376]
We incorporated data augmentation to generate synthetic chest CT images using publicly available datasets.
The synthetic images and masks were used to train a 2D U-net neural network and tested on 203 COVID-19 datasets.
arXiv Detail & Related papers (2021-12-11T02:03:30Z) - Lung Ultrasound Segmentation and Adaptation between COVID-19 and
Community-Acquired Pneumonia [0.17159130619349347]
We focus on the hyperechoic B-line segmentation task using deep neural networks.
We utilize both COVID-19 and CAP lung ultrasound data to train the networks.
Segmenting either type of lung condition at inference may support a range of clinical applications.
arXiv Detail & Related papers (2021-08-06T14:17:51Z) - Quantification of pulmonary involvement in COVID-19 pneumonia by means
of a cascade of two U-nets: training and assessment on multiple datasets using
different annotation criteria [83.83783947027392]
This study aims at exploiting Artificial intelligence (AI) for the identification, segmentation and quantification of COVID-19 pulmonary lesions.
We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets.
The accuracy in predicting the CT-Severity Score (CT-SS) of the LungQuant system has been also evaluated.
arXiv Detail & Related papers (2021-05-06T10:21:28Z) - FLANNEL: Focal Loss Based Neural Network Ensemble for COVID-19 Detection [61.04937460198252]
We construct the X-ray imaging data from 2874 patients with four classes: normal, bacterial pneumonia, non-COVID-19 viral pneumonia, and COVID-19.
To identify COVID-19, we propose a Focal Loss Based Neural Network Ensemble (FLANNEL).
FLANNEL consistently outperforms baseline models on the COVID-19 identification task in all metrics.
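FLANNEL's exact formulation is in the cited paper; as a sketch, the standard focal loss (Lin et al.) down-weights well-classified examples so training focuses on hard cases, which helps when one class (here, COVID-19) is much rarer than the others. The function name and toy probabilities below are illustrative, not from the paper:

```python
import math

def focal_loss(probs, target, gamma=2.0, alpha=None):
    """Per-sample multi-class focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    probs  : predicted class probabilities (should sum to ~1)
    target : index of the true class
    gamma  : focusing parameter; gamma=0 recovers plain cross-entropy
    alpha  : optional per-class weights (e.g., to upweight the rare class)
    """
    p_t = max(probs[target], 1e-12)  # clamp to avoid log(0)
    a_t = 1.0 if alpha is None else alpha[target]
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified sample (p_t = 0.9) is strongly down-weighted,
# while a hard sample (p_t = 0.2) keeps most of its loss.
easy = focal_loss([0.05, 0.9, 0.05], target=1)
hard = focal_loss([0.2, 0.2, 0.6], target=0)
```

With gamma = 0 the modulating factor vanishes and the loss reduces to ordinary cross-entropy, which is the usual sanity check for an implementation.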
arXiv Detail & Related papers (2020-10-30T03:17:31Z) - Integrative Analysis for COVID-19 Patient Outcome Prediction [53.11258640541513]
We combine radiomics of lung opacities and non-imaging features from demographic data, vital signs, and laboratory findings to predict need for intensive care unit admission.
Our methods may also be applied to other lung diseases including but not limited to community acquired pneumonia.
arXiv Detail & Related papers (2020-07-20T19:08:50Z) - Machine Learning Automatically Detects COVID-19 using Chest CTs in a
Large Multicenter Cohort [43.99203831722203]
Our retrospective study obtained 2096 chest CTs from 16 institutions.
A metric-based approach for classification of COVID-19 used interpretable features.
A deep learning-based classifier differentiated COVID-19 via 3D features extracted from CT attenuation and probability distribution of airspace opacities.
arXiv Detail & Related papers (2020-06-09T00:40:35Z) - Dual-Sampling Attention Network for Diagnosis of COVID-19 from Community
Acquired Pneumonia [46.521323145636906]
We develop a dual-sampling attention network to automatically differentiate COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT).
In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions.
Our algorithm can identify the COVID-19 images with the area under the receiver operating characteristic curve (AUC) value of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%.
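The reported accuracy, sensitivity, specificity, and F1-score all derive from the binary confusion matrix. A minimal sketch, with an illustrative function name and toy labels not taken from the paper:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    spec = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    acc = (tp + tn) / len(y_true)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec, "f1": f1}

# Toy example: tp=2, fn=1, tn=2, fp=0 -> sensitivity 2/3, specificity 1.0
m = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```

AUC, by contrast, is threshold-free: it summarizes the ranking of predicted scores across all possible cut-offs, which is why it is reported separately from these fixed-threshold metrics.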
arXiv Detail & Related papers (2020-05-06T09:56:51Z) - Automated Quantification of CT Patterns Associated with COVID-19 from
Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z) - Severity Assessment of Coronavirus Disease 2019 (COVID-19) Using
Quantitative Features from Chest CT Images [54.919022945740515]
The aim of this study is to realize automatic severity assessment (non-severe or severe) of COVID-19 based on chest CT images.
A random forest (RF) model is trained to assess the severity (non-severe or severe) based on quantitative features.
Several quantitative features, which have the potential to reflect the severity of COVID-19, were revealed.
arXiv Detail & Related papers (2020-03-26T15:49:32Z)
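A random-forest severity classifier like the one described in the last entry can be sketched with scikit-learn. Everything below is a stand-in under stated assumptions: the synthetic features substitute for the paper's quantitative CT features (which are not enumerated here), and none of the data comes from the cited study.

```python
# Hedged sketch, assuming scikit-learn; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                    # stand-ins for CT-derived features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic severe / non-severe label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Per-feature importances indicate which quantitative features drive
# the severity call, supporting the kind of feature analysis described above.
importances = clf.feature_importances_
```

Inspecting `feature_importances_` is one way such studies identify which quantitative features "have the potential to reflect the severity" of disease.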
This list is automatically generated from the titles and abstracts of the papers in this site.