Can Deep Learning Reliably Recognize Abnormality Patterns on Chest
X-rays? A Multi-Reader Study Examining One Month of AI Implementation in
Everyday Radiology Clinical Practice
- URL: http://arxiv.org/abs/2305.10116v2
- Date: Fri, 2 Jun 2023 14:18:54 GMT
- Title: Can Deep Learning Reliably Recognize Abnormality Patterns on Chest
X-rays? A Multi-Reader Study Examining One Month of AI Implementation in
Everyday Radiology Clinical Practice
- Authors: Daniel Kvak, Anna Chromcová, Petra Ovesná, Jakub Dandár, Marek
Biroš, Robert Hrubý, Daniel Dufek, Marija Pajdaković
- Abstract summary: We developed a deep-learning-based automatic detection algorithm (DLAD) to detect and localize seven specific radiological findings on chest X-rays.
The proposed DLAD achieved high sensitivity (ATE 1.000 (0.624-1.000), CON 0.864 (0.671-0.956), EFF 0.953 (0.887-0.983), LES 0.905 (0.715-0.978), SCE 1.000 (0.366-1.000), CMG 0.837 (0.711-0.917), PNO 0.875 (0.538-0.986)).
The findings of the study demonstrate that the suggested DLAD holds potential for integration into everyday clinical practice.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we developed a deep-learning-based automatic detection
algorithm (DLAD, Carebot AI CXR) to detect and localize seven specific
radiological findings (atelectasis (ATE), consolidation (CON), pleural effusion
(EFF), pulmonary lesion (LES), subcutaneous emphysema (SCE), cardiomegaly
(CMG), pneumothorax (PNO)) on chest X-rays (CXR). We collected 956 CXRs and
compared the performance of the DLAD with that of six individual radiologists
who assessed the images in a hospital setting. The proposed DLAD achieved high
sensitivity (ATE 1.000 (0.624-1.000), CON 0.864 (0.671-0.956), EFF 0.953
(0.887-0.983), LES 0.905 (0.715-0.978), SCE 1.000 (0.366-1.000), CMG 0.837
(0.711-0.917), PNO 0.875 (0.538-0.986)), even when compared to the radiologists
(LOWEST: ATE 0.000 (0.000-0.376), CON 0.182 (0.070-0.382), EFF 0.400
(0.302-0.506), LES 0.238 (0.103-0.448), SCE 0.000 (0.000-0.634), CMG 0.347
(0.228-0.486), PNO 0.375 (0.134-0.691), HIGHEST: ATE 1.000 (0.624-1.000), CON
0.864 (0.671-0.956), EFF 0.953 (0.887-0.983), LES 0.667 (0.456-0.830), SCE
1.000 (0.366-1.000), CMG 0.980 (0.896-0.999), PNO 0.875 (0.538-0.986)). The
findings of the study demonstrate that the suggested DLAD holds potential for
integration into everyday clinical practice as a decision support system,
effectively mitigating the false negative rate associated with junior and
intermediate radiologists.
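The sensitivities above are reported as point estimates with 95% confidence intervals. A minimal sketch of how such per-finding values can be computed from true-positive/false-negative counts, using the exact Clopper-Pearson interval (the abstract does not state which interval method was used, and the counts below are hypothetical):

```python
from scipy.stats import beta

def sensitivity_with_ci(tp, fn, alpha=0.05):
    """Sensitivity (recall) with an exact Clopper-Pearson confidence interval."""
    n = tp + fn
    point = tp / n
    # Clopper-Pearson bounds from the beta distribution; the interval
    # collapses to [0, upper] when tp == 0 and to [lower, 1] when tp == n.
    lower = beta.ppf(alpha / 2, tp, n - tp + 1) if tp > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, tp + 1, n - tp) if tp < n else 1.0
    return point, lower, upper

# Hypothetical example: all 7 of 7 positive cases detected.
point, lo, hi = sensitivity_with_ci(tp=7, fn=0)
```

Note how a perfect point estimate still carries a wide lower bound at small sample sizes, which is consistent with intervals such as SCE 1.000 (0.366-1.000) in the abstract.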
Related papers
- Robust Multicentre Detection and Classification of Colorectal Liver Metastases on CT: Application of Foundation Models [11.274035647041762]
We developed a foundation model-based AI pipeline for patient-level classification and lesion-level detection of CRLM on CT. UMedPT achieved the best performance and was fine-tuned with a head for classification and an FCOS-based head for lesion detection.
arXiv Detail & Related papers (2026-01-12T14:35:29Z) - Explainable Admission-Level Predictive Modeling for Prolonged Hospital Stay in Elderly Populations: Challenges in Low- and Middle-Income Countries [65.4286079244589]
Prolonged length of stay (pLoS) is a significant factor associated with the risk of adverse in-hospital events. We develop and explain a predictive model for pLoS using admission-level patient and hospital administrative data.
arXiv Detail & Related papers (2026-01-07T23:35:24Z) - Recognizing Pneumonia in Real-World Chest X-rays with a Classifier Trained with Images Synthetically Generated by Nano Banana [6.19177957021714]
We trained a classifier with synthetic chest X-ray (CXR) images generated by Nano Banana, the latest AI model for image generation and editing, released by Google. When directly applied to real-world CXRs having only been trained with synthetic data, the classifier achieved an AUROC of 0.923 and an AUPR of 0.900. These external validation results on real-world data demonstrate the feasibility of this approach and suggest potential for synthetic data in medical AI development.
arXiv Detail & Related papers (2025-11-29T10:05:44Z) - A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z) - Predicting Length of Stay in Neurological ICU Patients Using Classical Machine Learning and Neural Network Models: A Benchmark Study on MIMIC-IV [49.1574468325115]
This study explores multiple ML approaches for predicting LOS in the ICU, specifically for patients with neurological diseases, based on the MIMIC-IV dataset. The evaluated models include classic ML algorithms (K-Nearest Neighbors, Random Forest, XGBoost and CatBoost) and Neural Networks (LSTM, BERT and Temporal Fusion Transformer).
arXiv Detail & Related papers (2025-05-23T14:06:42Z) - Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction [47.7045293755736]
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life.
This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events.
We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not.
arXiv Detail & Related papers (2024-10-18T12:37:51Z) - Deep Learning Segmentation of Ascites on Abdominal CT Scans for Automatic Volume Quantification [12.25110399510034]
This retrospective study included contrast-enhanced and non-contrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer.
The model was trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age, 60 years +/- 11 [s.d.]; 143 female)
Its performance was measured by the Dice coefficient, standard deviations, and 95% confidence intervals, focusing on ascites volume in the peritoneal cavity.
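The segmentation quality in the study above is measured by the Dice coefficient. A minimal sketch of that metric on binary masks (the toy NumPy arrays are purely illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: overlap of 1 pixel, mask sizes 2 and 1 -> Dice = 2/3.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
score = dice_coefficient(pred, target)
```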
arXiv Detail & Related papers (2024-06-23T01:32:53Z) - Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray [86.38767955626179]
A deep-learning algorithm to predict coronary artery calcium (CAC) score was developed on 460 chest x-rays.
The diagnostic accuracy of the AICAC model assessed by the area under the curve (AUC) was the primary outcome.
arXiv Detail & Related papers (2024-03-27T16:56:14Z) - Using YOLO v7 to Detect Kidney in Magnetic Resonance Imaging [1.1567496318601842]
We developed a high-performing model for kidney detection using a semi-supervised approach with a medical image library.
Further external validation is required to assess the model's generalizability.
arXiv Detail & Related papers (2024-02-08T16:54:20Z) - Improving Fairness of Automated Chest X-ray Diagnosis by Contrastive
Learning [19.948079693716075]
Our proposed AI model utilizes supervised contrastive learning to minimize bias in CXR diagnosis.
We evaluated the methods on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXR images and the NIH Chest X-ray dataset with 112,120 CXR images.
arXiv Detail & Related papers (2024-01-25T20:03:57Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and interpretability of attention-based saliency.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - Learning to diagnose cirrhosis from radiological and histological labels
with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of the cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - Deep-Learning Tool for Early Identifying Non-Traumatic Intracranial
Hemorrhage Etiology based on CT Scan [40.51754649947294]
The deep learning model was developed with 1868 eligible NCCT scans with non-traumatic ICH collected between January 2011 and April 2018.
The model's diagnostic performance was compared with clinicians's performance.
With the proposed system's augmentation, clinicians achieved significant improvements in the sensitivity, specificity, and accuracy of diagnoses for certain hemorrhage etiologies.
arXiv Detail & Related papers (2023-02-02T08:45:17Z) - DeepCOVID-Fuse: A Multi-modality Deep Learning Model Fusing Chest
X-Radiographs and Clinical Variables to Predict COVID-19 Risk Levels [8.593516170110203]
DeepCOVID-Fuse is a deep learning fusion model to predict risk levels in coronavirus patients.
The accuracy of DeepCOVID-Fuse trained on CXRs and clinical variables is 0.658, with an AUC of 0.842.
arXiv Detail & Related papers (2023-01-20T20:54:25Z) - A Deep Learning Based Workflow for Detection of Lung Nodules With Chest
Radiograph [0.0]
We built a segmentation model to identify lung areas from CXRs, and sliced them into 16 patches.
These labeled patches were then used to fine-tune a deep neural network (DNN) model, classifying the patches as positive or negative.
arXiv Detail & Related papers (2021-12-19T16:19:46Z) - Controlling False Positive/Negative Rates for Deep-Learning-Based
Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments based on 290 clinical patients conclude that 1) the lesion-level FNR was effectively reduced from 0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by changing the lesion-level cost.
arXiv Detail & Related papers (2021-06-04T09:51:27Z) - Automated Quantification of CT Patterns Associated with COVID-19 from
Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z) - Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale
Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.