DeepCOVID-Fuse: A Multi-modality Deep Learning Model Fusing Chest
X-Radiographs and Clinical Variables to Predict COVID-19 Risk Levels
- URL: http://arxiv.org/abs/2301.08798v1
- Date: Fri, 20 Jan 2023 20:54:25 GMT
- Authors: Yunan Wu, Amil Dravid, Ramsey Michael Wehbe, Aggelos K. Katsaggelos
- Abstract summary: DeepCOVID-Fuse is a deep learning fusion model to predict risk levels in coronavirus patients.
The accuracy of DeepCOVID-Fuse trained on CXRs and clinical variables is 0.658, with an AUC of 0.842.
- Score: 8.593516170110203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: To present DeepCOVID-Fuse, a deep learning fusion model to predict
risk levels in patients with confirmed coronavirus disease 2019 (COVID-19), and
to evaluate the performance of pre-trained fusion models on full or partial
combinations of chest x-rays (CXRs) and clinical variables.
Materials and Methods: The initial CXRs, clinical variables and outcomes
(i.e., mortality, intubation, hospital length of stay, ICU admission) were
collected from February 2020 to April 2020 with reverse-transcription
polymerase chain reaction (RT-PCR) test results as the reference standard. The
risk level was determined by the outcome. The fusion model was trained on 1657
patients (age: 58.30 +/- 17.74 years; 807 female) and validated on 428 patients
(age: 56.41 +/- 17.03 years; 190 female) from the Northwestern Memorial HealthCare
system, and was tested on 439 patients (age: 56.51 +/- 17.78 years; 205 female)
from a single holdout hospital.
The performance of pre-trained fusion models on full or partial modalities was
compared on the test set using the DeLong test for the area under the receiver
operating characteristic curve (AUC) and the McNemar test for accuracy,
precision, recall, and F1 score.
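As a rough illustration of the statistical protocol above, the sketch below compares two models on the same test set: a fast DeLong test (Sun & Xu, 2014) for the difference of two correlated AUCs, and McNemar's test (via statsmodels) for paired accuracy. It assumes binary labels and placeholder arrays (y_true, scores_a/scores_b, pred_a/pred_b); it is not the authors' code, and the paper's multi-level risk outcome would need a per-class or averaged variant.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.contingency_tables import mcnemar

def _midrank(x):
    """Midranks of x, as used by the fast DeLong algorithm (Sun & Xu, 2014)."""
    order = np.argsort(x)
    xs = x[order]
    n = len(x)
    ranks = np.zeros(n)
    i = 0
    while i < n:
        j = i
        while j < n and xs[j] == xs[i]:
            j += 1
        ranks[i:j] = 0.5 * (i + j - 1) + 1  # average 1-based rank of tied block
        i = j
    out = np.empty(n)
    out[order] = ranks
    return out

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference of two correlated AUCs."""
    y_true = np.asarray(y_true)
    preds = np.vstack([scores_a, scores_b])
    pos, neg = preds[:, y_true == 1], preds[:, y_true == 0]
    m, n = pos.shape[1], neg.shape[1]
    aucs = np.empty(2)
    v01, v10 = np.empty((2, m)), np.empty((2, n))
    for k in range(2):
        tx, ty = _midrank(pos[k]), _midrank(neg[k])
        tz = _midrank(np.concatenate([pos[k], neg[k]]))
        aucs[k] = (tz[:m].sum() - m * (m + 1) / 2) / (m * n)
        v01[k] = (tz[:m] - tx) / n        # structural components, positives
        v10[k] = 1.0 - (tz[m:] - ty) / m  # structural components, negatives
    s = np.cov(v01) / m + np.cov(v10) / n  # 2x2 covariance of the two AUCs
    z = (aucs[0] - aucs[1]) / np.sqrt(s[0, 0] + s[1, 1] - 2 * s[0, 1])
    return 2 * norm.sf(abs(z))

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar's test on the paired correctness of two classifiers."""
    a, b = pred_a == y_true, pred_b == y_true
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=False, correction=True).pvalue
```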
Results: The accuracy of DeepCOVID-Fuse trained on CXRs and clinical
variables was 0.658, with an AUC of 0.842, significantly outperforming (p <
0.05) models trained only on CXRs (accuracy 0.621, AUC 0.807) or only on
clinical variables (accuracy 0.440, AUC 0.502). Given only CXRs as input at
test time, the pre-trained fusion model still raised accuracy to 0.632 and AUC
to 0.813; given only clinical variables, it raised accuracy to 0.539 and AUC
to 0.733.
Conclusion: The fusion model learns better feature representations across
different modalities during training and achieves good outcome predictions even
when only some of the modalities are used in testing.
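To make the fusion idea concrete, here is a minimal two-branch sketch in PyTorch: a CNN encoder for the CXR, an MLP for the clinical variables, and a shared classification head over the concatenated features. The ResNet-18 backbone, layer sizes, four risk levels, and the zero-filling of features for a missing modality at test time are all illustrative assumptions, not the published DeepCOVID-Fuse configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FusionNet(nn.Module):
    def __init__(self, n_clinical, n_classes=4, clin_dim=64):
        super().__init__()
        self.img_branch = models.resnet18(weights=None)
        self.img_branch.fc = nn.Identity()  # expose 512-d image features
        self.img_dim = 512
        self.clin_dim = clin_dim
        self.clin_branch = nn.Sequential(
            nn.Linear(n_clinical, clin_dim), nn.ReLU(),
            nn.Linear(clin_dim, clin_dim), nn.ReLU(),
        )
        self.head = nn.Linear(self.img_dim + clin_dim, n_classes)

    def forward(self, cxr=None, clinical=None):
        # At inference a modality may be absent; zero-filling its features is
        # one simple (assumed) way to reuse the jointly trained head.
        batch = cxr.shape[0] if cxr is not None else clinical.shape[0]
        device = self.head.weight.device
        f_img = (self.img_branch(cxr) if cxr is not None
                 else torch.zeros(batch, self.img_dim, device=device))
        f_clin = (self.clin_branch(clinical) if clinical is not None
                  else torch.zeros(batch, self.clin_dim, device=device))
        return self.head(torch.cat([f_img, f_clin], dim=1))

# Usage: full fusion, image-only, or clinical-only inputs all produce logits.
model = FusionNet(n_clinical=20)
full = model(cxr=torch.randn(2, 3, 224, 224), clinical=torch.randn(2, 20))
cxr_only = model(cxr=torch.randn(2, 3, 224, 224))
```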
Related papers
- Multi-modal AI for comprehensive breast cancer prognostication [18.691704371847855]
We developed a test for breast cancer patient stratification based on digital pathology and clinical characteristics using novel AI methods.
The test was developed and evaluated using data from a total of 8,161 breast cancer patients across 15 cohorts.
Results suggest that our AI test can improve accuracy, extend applicability to a wider range of patients, and enhance access to treatment selection tools.
arXiv Detail & Related papers (2024-10-28T17:54:29Z)
- Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray [86.38767955626179]
A deep-learning algorithm to predict the coronary artery calcium (CAC) score was developed on 460 chest x-rays.
The diagnostic accuracy of the AICAC model assessed by the area under the curve (AUC) was the primary outcome.
arXiv Detail & Related papers (2024-03-27T16:56:14Z)
- A new method of modeling the multi-stage decision-making process of CRT using machine learning with uncertainty quantification [8.540186345787244]
The purpose of this study is to create a multi-stage machine learning model to predict cardiac resynchronization therapy (CRT) response for heart failure patients.
arXiv Detail & Related papers (2023-09-15T14:18:53Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
The study investigates the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to those of state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- A Generalizable Artificial Intelligence Model for COVID-19 Classification Task Using Chest X-ray Radiographs: Evaluated Over Four Clinical Datasets with 15,097 Patients [6.209420804714487]
The generalizability of the trained model was retrospectively evaluated using four different real-world clinical datasets.
The AI model trained using a single-source clinical dataset achieved an AUC of 0.82 when applied to the internal temporal test set.
An AUC of 0.79 was achieved when applied to a multi-institutional COVID-19 dataset collected by the Medical Imaging and Data Resource Center.
arXiv Detail & Related papers (2022-10-04T04:12:13Z)
- Multi-institutional Validation of Two-Streamed Deep Learning Method for Automated Delineation of Esophageal Gross Tumor Volume using planning-CT and FDG-PETCT [14.312659667401302]
The current clinical workflow for esophageal gross tumor volume (GTV) contouring relies on manual delineation, which incurs high labor costs and inter-user variability.
The study validates the clinical applicability of a deep learning (DL) multi-modality esophageal GTV contouring model developed at one institution and tested at multiple others.
arXiv Detail & Related papers (2021-10-11T13:56:09Z)
- The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
arXiv Detail & Related papers (2021-09-18T02:28:01Z)
- Deep learning-based COVID-19 pneumonia classification using chest CT images: model generalizability [54.86482395312936]
Deep learning (DL) classification models were trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries.
We trained nine identical DL-based classification models by using combinations of the datasets with a 72% train, 8% validation, and 20% test data split.
Models trained on multiple datasets and evaluated on a test set drawn from one of the training datasets performed better.
arXiv Detail & Related papers (2021-02-18T21:14:52Z)
- CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
- Predicting Clinical Outcomes in COVID-19 using Radiomics and Deep Learning on Chest Radiographs: A Multi-Institutional Study [3.3839341058136054]
We predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients.
We analyzed 530 deidentified CXRs from COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020.
arXiv Detail & Related papers (2020-07-15T22:48:11Z)
- Joint Prediction and Time Estimation of COVID-19 Developing Severe Symptoms using Chest CT Scan [49.209225484926634]
We propose a joint classification and regression method to determine whether a patient will develop severe symptoms at a later time.
To do this, the proposed method weights each sample to reduce the influence of outliers and to address the class-imbalance problem.
Our proposed method yields 76.97% accuracy in predicting severe cases, a correlation coefficient of 0.524, and a 0.55-day difference for the estimated conversion time.
arXiv Detail & Related papers (2020-05-07T12:16:37Z)