Automatic classification of multiple catheters in neonatal radiographs
with deep learning
- URL: http://arxiv.org/abs/2011.07394v1
- Date: Sat, 14 Nov 2020 21:27:21 GMT
- Authors: Robert D. E. Henderson, Xin Yi, Scott J. Adams and Paul Babyn
- Abstract summary: We develop and evaluate a deep learning algorithm to classify multiple catheters on neonatal chest and abdominal radiographs.
A convolutional neural network (CNN) was trained using a dataset of 777 neonatal chest and abdominal radiographs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop and evaluate a deep learning algorithm to classify multiple
catheters on neonatal chest and abdominal radiographs. A convolutional neural
network (CNN) was trained using a dataset of 777 neonatal chest and abdominal
radiographs, with a split of 81%-9%-10% for training-validation-testing,
respectively. We employed ResNet-50 (a CNN), pre-trained on ImageNet. Ground
truth labelling was limited to tagging each image to indicate the presence or
absence of endotracheal tubes (ETTs), nasogastric tubes (NGTs), and umbilical
arterial and venous catheters (UACs, UVCs). The dataset included 561 images
containing 2 or more catheters, 167 images with only one, and 49 with none.
Performance was measured with average precision (AP), calculated from the area
under the precision-recall curve. On our test data, the algorithm achieved an
overall AP (95% confidence interval) of 0.977 (0.679-0.999) for NGTs, 0.989
(0.751-1.000) for ETTs, 0.979 (0.873-0.997) for UACs, and 0.937 (0.785-0.984)
for UVCs. Performance was similar for the set of 58 test images consisting of 2
or more catheters, with an AP of 0.975 (0.255-1.000) for NGTs, 0.997
(0.009-1.000) for ETTs, 0.981 (0.797-0.998) for UACs, and 0.937 (0.689-0.990)
for UVCs. Our network thus achieves strong performance in the simultaneous
detection of these four catheter types. Radiologists may use such an algorithm
as a time-saving mechanism to automate reporting of catheters on radiographs.
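The abstract defines average precision (AP) as the area under the precision-recall curve, computed per catheter type. As a sketch of that metric (the example labels and scores are hypothetical; in practice a library routine such as scikit-learn's `average_precision_score` would be used), a minimal pure-Python implementation might look like:

```python
def average_precision(labels, scores):
    """Average precision: mean of precision values at each true-positive rank.

    labels: binary ground-truth tags (1 = catheter present), one per image.
    scores: model confidence scores, one per image.
    """
    # Rank images by descending confidence.
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    total_positives = sum(labels)
    true_positives = 0
    ap = 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            true_positives += 1
            precision_at_rank = true_positives / rank
            ap += precision_at_rank / total_positives
    return ap

# Hypothetical per-image scores for one catheter type (e.g. NGT present).
labels = [1, 0, 1, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.2]
print(round(average_precision(labels, scores), 3))  # prints 0.806
```

A perfect ranking (every positive scored above every negative) yields an AP of 1.0, matching the upper ends of the confidence intervals reported above.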
Related papers
- Developing a Machine Learning-Based Clinical Decision Support Tool for Uterine Tumor Imaging [0.0]
Uterine leiomyosarcoma (LMS) is a rare but aggressive malignancy.
It is difficult to differentiate LMS from degenerated leiomyoma (LM), a prevalent but benign condition.
We curated a dataset of 115 axial T2-weighted MRI images from 110 patients with uterine tumors (UTs) that included five different tumor types.
We applied nnU-Net and explored the effect of training set size on performance by randomly generating subsets with 25, 45, 65 and 85 training set images.
arXiv Detail & Related papers (2023-08-20T21:46:05Z)
- Uncertainty-inspired Open Set Learning for Retinal Anomaly Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, trained with fundus images of 9 retinal conditions.
Our UIOS model with a thresholding strategy achieved F1 scores of 99.55%, 97.01% and 91.91% across the testing sets.
UIOS correctly assigned high uncertainty scores, prompting the need for a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z)
- Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction [3.9874211732430447]
We trained a U-Net for artefact reduction on simulated sparse-view cranial CT scans from 3000 patients.
We also trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection.
The U-Net performed superior compared to unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis.
arXiv Detail & Related papers (2023-03-16T14:21:45Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
We investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
arXiv Detail & Related papers (2021-09-18T02:28:01Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Semi-supervised learning for generalizable intracranial hemorrhage detection and segmentation [0.0]
We develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set.
An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one US institution from 2010-2017.
A second "student" model was trained on this combined pixel-labeled and pseudo-labeled dataset.
arXiv Detail & Related papers (2021-05-03T00:14:43Z)
- Deep Learning Models for Calculation of Cardiothoracic Ratio from Chest Radiographs for Assisted Diagnosis of Cardiomegaly [0.0]
We propose an automated method to compute the cardiothoracic ratio and detect the presence of cardiomegaly from chest radiographs.
We develop two separate models to demarcate the heart and chest regions in an X-ray image using bounding boxes and use their outputs to calculate the cardiothoracic ratio.
arXiv Detail & Related papers (2021-01-19T13:09:29Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Multiple resolution residual network for automatic thoracic organs-at-risk segmentation from CT [2.9023633922848586]
We implement and evaluate a multiple resolution residual network (MRRN) for multiple normal organs-at-risk (OAR) segmentation from computed tomography (CT) images.
Our approach simultaneously combines feature streams computed at multiple image resolutions and feature levels through residual connections.
We trained our approach using 206 thoracic CT scans of lung cancer patients with 35 scans held out for validation to segment the left and right lungs, heart, esophagus, and spinal cord.
arXiv Detail & Related papers (2020-05-27T22:39:09Z)
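The cardiothoracic-ratio paper listed above computes the ratio from the bounding boxes of two separate heart and chest detectors. Under the usual definition (maximal transverse heart width divided by internal thoracic width), and with hypothetical box coordinates standing in for real detector outputs, a minimal sketch of that final calculation could be:

```python
def cardiothoracic_ratio(heart_box, thorax_box):
    """CTR = transverse heart width / internal thoracic width.

    Boxes are (x_min, y_min, x_max, y_max) in image pixel coordinates,
    as produced by the heart and chest detection models.
    """
    heart_width = heart_box[2] - heart_box[0]
    thorax_width = thorax_box[2] - thorax_box[0]
    return heart_width / thorax_width

# Hypothetical bounding boxes from the two detection models.
heart = (420, 610, 830, 980)     # width 410 px
thorax = (180, 300, 1020, 1100)  # width 840 px
print(round(cardiothoracic_ratio(heart, thorax), 3))  # prints 0.488
```

A CTR above roughly 0.5 on a posteroanterior chest radiograph is a common screening threshold for cardiomegaly, which is how a bounding-box pipeline like this can assist diagnosis.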
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.