How many radiographs are needed to re-train a deep learning system for
object detection?
- URL: http://arxiv.org/abs/2210.08734v1
- Date: Mon, 17 Oct 2022 04:02:30 GMT
- Title: How many radiographs are needed to re-train a deep learning system for
object detection?
- Authors: Raniere Silva, Khizar Hayat, Christopher M Riggs, Michael Doube
- Abstract summary: We annotated 396 radiographs of left and right carpi dorsal 75 medial to palmarolateral oblique (DMPLO) projection with the location of the radius, proximal row of carpal bones, distal row of carpal bones, accessory carpal bone, first carpal bone (if present), and metacarpus (metacarpal II, III, and IV).
Models trained using 96 radiographs or more achieved precision, recall and mAP above 0.95, including for the first carpal bone, when trained for 32 epochs.
The best model needed twice as many epochs to learn to detect the first carpal bone as it did for the other bones.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: Object detection in radiograph computer vision has largely
benefited from progress in deep convolutional neural networks and can, for
example, annotate a radiograph with a box around a knee joint or intervertebral
disc. Is deep learning capable of detecting small structures (less than 1% of
the image) in radiographs? And how many radiographs do we need to use when
re-training a deep learning model?
Methods: We annotated 396 radiographs of left and right carpi dorsal 75
medial to palmarolateral oblique (DMPLO) projection with the location of
radius, proximal row of carpal bones, distal row of carpal bones, accessory
carpal bone, first carpal bone (if present), and metacarpus (metacarpal II,
III, and IV). The radiographs and their annotations were split into sets that
were used for leave-one-out cross-validation of models created by transfer
learning from YOLOv5s.
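The fold construction for leave-one-out cross-validation described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the file names are hypothetical, and in the study each training fold would be used to fine-tune a YOLOv5s checkpoint on the annotated radiographs.

```python
def leave_one_out_folds(items):
    """Yield (train, held_out) pairs: each item is held out exactly once
    while the remaining items form the training set."""
    for i, held_out in enumerate(items):
        train = items[:i] + items[i + 1:]
        yield train, held_out

# Hypothetical radiograph identifiers; a real run would pair each fold
# with its annotation files in a per-fold YOLOv5 data configuration.
radiographs = ["carpus_001.png", "carpus_002.png", "carpus_003.png"]
for train, held_out in leave_one_out_folds(radiographs):
    print(f"held out: {held_out}, training on {len(train)} radiographs")
```

Each of the N items is held out once, so N models are trained, each on N-1 items.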
Results: Models trained using 96 radiographs or more achieved precision,
recall and mAP above 0.95, including for the first carpal bone, when trained
for 32 epochs. The best model needed twice as many epochs to learn to detect
the first carpal bone as it did for the other bones.
Conclusions: Free and open source state of the art object detection models
based on deep learning can be re-trained for radiograph computer vision
applications with 100 radiographs, achieving precision, recall and mAP above
0.95.
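The precision and recall figures reported above follow the standard object-detection definitions over true positives, false positives and false negatives. A minimal sketch, with purely hypothetical detection counts for one class:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for a single class (e.g. first carpal bone):
p, r = precision_recall(tp=97, fp=3, fn=2)
print(round(p, 3), round(r, 3))  # 0.97 0.98
```

mAP additionally averages precision over recall thresholds and classes; frameworks such as YOLOv5 report it directly during validation.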
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Self-supervised vision-language alignment of deep learning representations for bone X-rays analysis [53.809054774037214]
This paper proposes leveraging vision-language pretraining on bone X-rays paired with French reports.
It is the first study to integrate French reports to shape the embedding space devoted to bone X-Rays representations.
arXiv Detail & Related papers (2024-05-14T19:53:20Z) - Interpretation of Chest x-rays affected by bullets using deep transfer
learning [0.8189696720657246]
Deep learning in radiology provides the opportunity to classify, detect and segment different diseases automatically.
In the proposed study, we worked on a non-trivial aspect of medical imaging where we classified and localized the X-Rays affected by bullets.
This is the first study on the detection and classification of radiographs affected by bullets using deep learning.
arXiv Detail & Related papers (2022-03-25T05:53:45Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - Development of the algorithm for differentiating bone metastases and
trauma of the ribs in bone scintigraphy and demonstration of visual evidence
of the algorithm -- Using only anterior bone scan view of thorax [0.0]
There is no report of an AI model that determines whether RI accumulation in the ribs represents bone metastasis or trauma using only the anterior thorax image of bone scintigraphy.
We developed an algorithm to classify and diagnose whether RI accumulation on the ribs is bone metastasis or trauma using only the anterior bone scan view of the thorax.
arXiv Detail & Related papers (2021-09-30T23:55:31Z) - XraySyn: Realistic View Synthesis From a Single Radiograph Through CT
Priors [118.27130593216096]
A radiograph visualizes the internal anatomy of a patient through the use of X-ray, which projects 3D information onto a 2D plane.
To the best of our knowledge, this is the first work on radiograph view synthesis.
We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without ground-truth bone labels.
arXiv Detail & Related papers (2020-12-04T05:08:53Z) - Pose-dependent weights and Domain Randomization for fully automatic
X-ray to CT Registration [51.280096834264256]
Fully automatic X-ray to CT registration requires an initial alignment within the capture range of existing intensity-based registrations.
This work provides a novel automatic initialization, which enables end-to-end registration.
The mean (± standard deviation) target registration error in millimetres is 4.1 ± 4.3 for simulated X-rays with a success rate of 92% and 4.2 ± 3.9 for real X-rays with a success rate of 86.8%, where a success is defined as a translation error of less than 30 mm.
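The success-rate criterion in the summary above (translation error under 30 mm) reduces to a simple threshold count. A minimal sketch with hypothetical per-case errors:

```python
def success_rate(errors_mm, threshold_mm=30.0):
    """Fraction of registrations whose translation error falls below
    the threshold (30 mm in the summarized paper's definition)."""
    successes = sum(1 for e in errors_mm if e < threshold_mm)
    return successes / len(errors_mm)

# Hypothetical translation errors in millimetres for five cases:
errors = [4.1, 12.0, 29.9, 31.5, 3.2]
print(success_rate(errors))  # 0.8
```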
arXiv Detail & Related papers (2020-11-14T12:50:32Z) - Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary
Edema Assessment [39.60171837961607]
We develop a neural network model that is trained on both images and free-text to assess pulmonary edema severity from chest radiographs at inference time.
Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment.
arXiv Detail & Related papers (2020-08-22T17:28:39Z) - Evaluation of Contemporary Convolutional Neural Network Architectures
for Detecting COVID-19 from Chest Radiographs [0.0]
We train and evaluate three model architectures, proposed for chest radiograph analysis, under varying conditions.
We find issues that discount the impressive model performances proposed by contemporary studies on this subject.
arXiv Detail & Related papers (2020-06-30T15:22:39Z) - Radioactive data: tracing through training [130.2266320167683]
We propose a new technique, "radioactive data", that makes imperceptible changes to this dataset such that any model trained on it will bear an identifiable mark.
Given a trained model, our technique detects the use of radioactive data and provides a level of confidence (p-value).
Our method is robust to data augmentation and to the stochasticity of deep network optimization.
arXiv Detail & Related papers (2020-02-03T18:41:08Z) - Estimating and abstracting the 3D structure of bones using neural
networks on X-ray (2D) images [0.0]
We present a deep-learning based method for estimating the 3D structure of a bone from a pair of 2D X-ray images.
Our predictions have an average root mean square (RMS) distance of 1.08 mm between the predicted and true shapes, making it more accurate than the average error achieved by eight other examined 3D bone reconstruction approaches.
arXiv Detail & Related papers (2020-01-16T20:41:17Z)
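The RMS distance reported in the last summary above is the root-mean-square of per-point Euclidean distances between corresponding predicted and true surface points. A minimal sketch with hypothetical 3D coordinates:

```python
import math

def rms_distance(pred, true):
    """Root-mean-square Euclidean distance between corresponding
    3D points of a predicted and a ground-truth shape."""
    squared = [
        sum((p - t) ** 2 for p, t in zip(pp, tt))
        for pp, tt in zip(pred, true)
    ]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical corresponding surface points (mm):
pred = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
true = [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]
print(rms_distance(pred, true))  # ~0.707
```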
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.