Sensitivity and Specificity Evaluation of Deep Learning Models for
Detection of Pneumoperitoneum on Chest Radiographs
- URL: http://arxiv.org/abs/2010.08872v1
- Date: Sat, 17 Oct 2020 21:41:53 GMT
- Authors: Manu Goyal, Judith Austin-Strohbehn, Sean J. Sun, Karen Rodriguez,
Jessica M. Sin, Yvonne Y. Cheung and Saeed Hassanpour
- Abstract summary: State-of-the-art deep learning models (ResNet101, InceptionV3, DenseNet161, and ResNeXt101) were trained on a subset of this dataset.
The DenseNet161 model accurately classified radiographs from different imaging systems.
- Score: 0.8437813529429724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background: Deep learning has great potential to assist with detecting and
triaging critical findings such as pneumoperitoneum on medical images. To be
clinically useful, the performance of this technology still needs to be
validated for generalizability across different types of imaging systems.
Materials and Methods: This retrospective study included 1,287 chest X-ray
images of patients who underwent initial chest radiography at 13 different
hospitals between 2011 and 2019. The chest X-ray images were labelled
independently by four radiologist experts as positive or negative for
pneumoperitoneum. State-of-the-art deep learning models (ResNet101,
InceptionV3, DenseNet161, and ResNeXt101) were trained on a subset of this
dataset, and the automated classification performance was evaluated on the rest
of the dataset by measuring the AUC, sensitivity, and specificity for each
model. Furthermore, the generalizability of these deep learning models was
assessed by stratifying the test dataset by the type of imaging system used.
Results: All deep learning models performed well at identifying radiographs
with pneumoperitoneum, with DenseNet161 achieving the highest AUC (95.7%),
specificity (89.9%), and sensitivity (91.6%). The DenseNet161 model accurately
classified radiographs from different imaging systems (accuracy: 90.8%), even
though it was trained only on images captured by a single imaging system at a
single institution. This result suggests that our model generalizes, learning
salient features in chest X-ray images that detect pneumoperitoneum
independent of the imaging system.
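The evaluation above reports AUC, sensitivity, and specificity for each model. As a minimal illustrative sketch (not the authors' code; the labels and scores below are invented toy data), these metrics can be computed from binary ground-truth labels and predicted probabilities like so:

```python
# Hedged sketch: computing sensitivity, specificity, and AUC for a binary
# classifier such as the pneumoperitoneum models described above.
# All data here is hypothetical.

def evaluate(y_true, y_prob, threshold=0.5):
    """Return (sensitivity, specificity, auc) for binary labels and scores."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    # AUC via the Mann-Whitney U statistic: the probability that a randomly
    # chosen positive case scores higher than a randomly chosen negative case.
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    pairs = [1.0 if a > b else 0.5 if a == b else 0.0 for a in pos for b in neg]
    auc = sum(pairs) / len(pairs)
    return sensitivity, specificity, auc

# Toy example (1 = pneumoperitoneum present); scores separate perfectly here.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4]
print(evaluate(labels, scores))  # -> (1.0, 1.0, 1.0)
```

Stratifying these metrics by imaging system, as the authors do, amounts to calling the same evaluation on each subgroup of the test set separately.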
Related papers
- Fast-staged CNN Model for Accurate pulmonary diseases and Lung cancer detection [0.0]
This research evaluates a deep learning model designed to detect lung cancer, specifically pulmonary nodules, along with eight other lung pathologies, using chest radiographs.
A two-stage classification system, utilizing ensemble methods and transfer learning, is employed to first triage images into Normal or Abnormal.
The model achieves notable results in classification, with a top-performing accuracy of 77%, a sensitivity of 0.713, a specificity of 0.776 during external validation, and an AUC score of 0.888.
arXiv Detail & Related papers (2024-12-16T11:47:07Z)
- AttCDCNet: Attention-enhanced Chest Disease Classification using X-Ray Images [0.0]
We propose a novel detection model named AttCDCNet for the task of X-ray image diagnosis.
The proposed model achieved an accuracy, precision and recall of 94.94%, 95.14% and 94.53%, respectively, on the COVID-19 Radiography dataset.
arXiv Detail & Related papers (2024-10-20T16:08:20Z)
- Classification of lung cancer subtypes on CT images with synthetic pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z)
- Enhancing COVID-19 Diagnosis through Vision Transformer-Based Analysis of Chest X-ray Images [0.0]
The research endeavor posits an innovative framework for the automated diagnosis of COVID-19, harnessing raw chest X-ray images.
The developed models were appraised in terms of their binary classification performance, discerning COVID-19 from Normal cases.
The proposed model achieved high accuracy, registering 99.92% and 99.84% for binary classification, 97.95% and 86.48% for ternary classification, and 86.81% for quaternary classification.
arXiv Detail & Related papers (2023-06-12T07:34:28Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
This work investigates the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- COVID-19 Detection Based on Self-Supervised Transfer Learning Using Chest X-Ray Images [38.65823547986758]
We propose a new learning scheme called self-supervised transfer learning for detecting COVID-19 from chest X-ray (CXR) images.
We provide quantitative evaluation on the largest open COVID-19 CXR dataset and qualitative results for visual inspection.
arXiv Detail & Related papers (2022-12-19T07:10:51Z)
- VinDr-SpineXR: A deep learning framework for spinal lesions detection and classification from radiographs [0.812774532310979]
This work aims at developing and evaluating a deep learning-based framework, named VinDr-SpineXR, for the classification and localization of abnormalities from spine X-rays.
We build a large dataset, comprising 10,468 spine X-ray images from 5,000 studies, each of which is manually annotated by an experienced radiologist with bounding boxes around abnormal findings in 13 categories.
The VinDr-SpineXR is evaluated on a test set of 2,078 images from 1,000 studies, which is kept separate from the training set.
arXiv Detail & Related papers (2021-06-24T11:45:44Z)
- M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening from CT Imaging [85.00066186644466]
We propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging.
In addition to distinguishing COVID-19 from Healthy, H1N1, and CAP cases, our M3Lung-Sys is also able to locate the areas of relevant lesions.
arXiv Detail & Related papers (2020-10-07T06:22:24Z)
- Reliable Tuberculosis Detection using Chest X-ray with Deep Learning, Segmentation and Visualization [0.0]
Tuberculosis is a chronic lung disease that occurs due to bacterial infection and is one of the top 10 leading causes of death.
We have detected TB reliably from the chest X-ray images using image pre-processing, data augmentation, image segmentation, and deep-learning classification techniques.
arXiv Detail & Related papers (2020-07-29T15:11:34Z)
- Predicting COVID-19 Pneumonia Severity on Chest X-ray with Deep Learning [57.00601760750389]
We present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
Such a tool can gauge severity of COVID-19 lung infections that can be used for escalation or de-escalation of care.
arXiv Detail & Related papers (2020-05-24T23:13:16Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, with acceptable geometry in 95.8% and annotation mask in 96.2%, compared to 27.0% and 34.9% respectively in control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.