SCREENet: A Multi-view Deep Convolutional Neural Network for
Classification of High-resolution Synthetic Mammographic Screening Scans
- URL: http://arxiv.org/abs/2009.08563v3
- Date: Fri, 25 Sep 2020 19:36:05 GMT
- Title: SCREENet: A Multi-view Deep Convolutional Neural Network for
Classification of High-resolution Synthetic Mammographic Screening Scans
- Authors: Saeed Seyyedi, Margaret J. Wong, Debra M. Ikeda, Curtis P. Langlotz
- Abstract summary: We develop and evaluate a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms.
We assess the effect on accuracy of image resolution and training set size.
- Score: 3.8137985834223502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: To develop and evaluate the accuracy of a multi-view deep learning
approach to the analysis of high-resolution synthetic mammograms from digital
breast tomosynthesis screening cases, and to assess the effect on accuracy of
image resolution and training set size. Materials and Methods: In a
retrospective study, 21,264 screening digital breast tomosynthesis (DBT) exams
obtained at our institution were collected along with associated radiology
reports. The 2D synthetic mammographic images from these exams, with varying
resolutions and data set sizes, were used to train a multi-view deep
convolutional neural network (MV-CNN) to classify screening images into BI-RADS
classes (BI-RADS 0, 1 and 2) before evaluation on a held-out set of exams.
Results: Area under the receiver operating characteristic curve (AUC) for
BI-RADS 0 vs non-BI-RADS 0 class was 0.912 for the MV-CNN trained on the full
dataset. The model obtained an accuracy of 84.8%, a recall of 95.9%, and a precision of
95.0%. This AUC value decreased when the same model was trained with 50% and
25% of images (AUC = 0.877, P=0.010 and 0.834, P=0.009 respectively). Also, the
performance dropped when the same model was trained using images that were
under-sampled by 1/2 and 1/4 (AUC = 0.870, P=0.011 and 0.813, P=0.009
respectively).
Conclusion: This deep learning model classified high-resolution synthetic
mammography scans into normal vs needing further workup using tens of thousands
of high-resolution images. Smaller training data sets and lower-resolution
images both caused a significant decrease in performance.
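As a rough illustration of the multi-view design (not the authors' published code), the PyTorch sketch below encodes each screening view with a shared CNN backbone and concatenates the per-view features for a three-class BI-RADS 0/1/2 prediction. The ResNet-18 backbone, four-view input, and image size are assumptions made for the example.

```python
# Minimal multi-view CNN sketch (illustrative, not the SCREENet implementation):
# a shared ResNet-18 encoder per view, concatenated features, 3-class head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiViewCNN(nn.Module):
    def __init__(self, num_views: int = 4, num_classes: int = 3):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)  # grayscale input
        backbone.fc = nn.Identity()                # expose the 512-d features
        self.encoder = backbone                    # weights shared across views
        self.classifier = nn.Linear(512 * num_views, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 1, H, W), e.g. L-CC, R-CC, L-MLO, R-MLO
        feats = [self.encoder(views[:, i]) for i in range(views.size(1))]
        return self.classifier(torch.cat(feats, dim=1))

model = MultiViewCNN()
logits = model(torch.randn(2, 4, 1, 512, 512))     # BI-RADS 0/1/2 logits
```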
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
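As a toy illustration of the subsampling-pattern variation this paper aims to be robust to (not the paper's method), the NumPy sketch below builds a Cartesian undersampling mask with a fully sampled low-frequency band plus random phase-encode lines and applies it in k-space; the acceleration factor and center fraction are assumed values.

```python
# Toy Cartesian k-space subsampling (illustrative): keep a fully sampled
# center band plus a random subset of phase-encode columns.
import numpy as np

def subsample_kspace(image: np.ndarray, accel: int = 4,
                     center_frac: float = 0.08, seed: int = 0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    mask = np.zeros(w, dtype=bool)
    n_center = int(center_frac * w)
    mask[w // 2 - n_center // 2 : w // 2 + n_center // 2] = True  # low freqs
    mask |= rng.random(w) < (1.0 / accel)                         # random lines
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return kspace * mask[None, :], mask                           # zeroed columns

img = np.random.rand(128, 128)
undersampled_k, mask = subsample_kspace(img)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled_k)))
```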
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
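The Dice similarity coefficient quoted above measures the overlap between a predicted and a reference mask; a minimal illustrative implementation:

```python
# Dice similarity coefficient for binary segmentation masks
# (the evaluation metric reported above; minimal illustrative version).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4 foreground voxels
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6 foreground voxels, 4 shared
print(dice(a, b))                        # 2*4 / (4 + 6) = 0.8
```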
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Comparison of retinal regions-of-interest imaged by OCT for the
classification of intermediate AMD [3.0171643773711208]
A total of 15,744 B-scans from 269 intermediate AMD patients and 115 normal subjects were used in this study.
For each subset, a convolutional neural network (based on the VGG16 architecture and pre-trained on ImageNet) was trained and tested.
The performance of the models was evaluated using the area under the receiver operating characteristic (AUROC), accuracy, sensitivity, and specificity.
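These metrics are typically derived from a score threshold and a confusion matrix; a minimal scikit-learn sketch with synthetic labels (not the study's data):

```python
# Typical computation of AUROC, accuracy, sensitivity, and specificity
# (illustrative; synthetic scores, not the study's data).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])
y_pred = (y_score >= 0.5).astype(int)            # threshold the scores

auroc = roc_auc_score(y_true, y_score)           # threshold-free ranking metric
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                     # recall on the positive class
specificity = tn / (tn + fp)
print(auroc, accuracy, sensitivity, specificity)
```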
arXiv Detail & Related papers (2023-05-04T13:48:55Z)
- An Ensemble Method to Automatically Grade Diabetic Retinopathy with
Optical Coherence Tomography Angiography Images [4.640835690336653]
We propose an ensemble method to automatically grade Diabetic Retinopathy (DR) images from the Diabetic Retinopathy Analysis Challenge (DRAC) 2022.
First, we adopt state-of-the-art classification networks and train them to grade UW-OCTA images with different splits of the available dataset.
Ultimately, we obtain 25 models, of which the top 16 are selected and ensembled to generate the final predictions.
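A minimal sketch of this select-then-average ensembling step, with random stand-in predictions (the validation-score selection criterion is an assumption, not necessarily the authors' choice):

```python
# Select the best models by a validation score, then average their
# class probabilities (illustrative stand-in data).
import numpy as np

n_models, n_samples, n_grades = 25, 10, 3
val_scores = np.random.rand(n_models)              # e.g. validation kappa/AUC
probs = np.random.dirichlet(np.ones(n_grades), size=(n_models, n_samples))

top16 = np.argsort(val_scores)[-16:]               # keep the best 16 of 25
ensemble_probs = probs[top16].mean(axis=0)         # average class probabilities
final_grades = ensemble_probs.argmax(axis=1)       # predicted DR grade per image
```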
arXiv Detail & Related papers (2022-12-12T22:06:47Z)
- FundusQ-Net: a Regression Quality Assessment Deep Learning Algorithm for
Fundus Images Quality Grading [0.0]
Glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment.
A key step in this process is to automatically estimate the quality of the fundus images, to ensure they are interpretable by a human operator or a machine learning model.
We present a novel fundus image quality scale and deep learning (DL) model that can estimate fundus image quality relative to this new scale.
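As a sketch of the general quality-regression setup (not the FundusQ-Net architecture), the example below attaches a single continuous output to a CNN backbone and trains it with MSE against expert grades; the DenseNet-121 backbone and the 1-10 target scale are assumptions made for the example.

```python
# Minimal image-quality regression sketch: CNN backbone, one continuous
# output, MSE loss against expert quality grades (illustrative).
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 1)  # quality score

criterion = nn.MSELoss()
images = torch.randn(4, 3, 224, 224)                   # dummy fundus images
targets = torch.tensor([[7.5], [9.0], [4.0], [6.5]])   # assumed 1-10 scale
loss = criterion(model(images), targets)
loss.backward()
```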
arXiv Detail & Related papers (2022-05-02T21:01:34Z)
- Development of an algorithm for medical image segmentation of bone
tissue in interaction with metallic implants [58.720142291102135]
This study develops an algorithm for calculating bone growth in contact with metallic implants.
Bone and implant tissue were manually segmented in the training data set.
In terms of network accuracy, the model reached around 98%.
arXiv Detail & Related papers (2022-04-22T08:17:20Z)
- The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
arXiv Detail & Related papers (2021-09-18T02:28:01Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) correctly predicted 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset assembled for this task to date.
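A minimal ViT classification setup of the kind described, using torchvision's ViT-B/16 (the seven-class head is an assumed example, not the paper's configuration):

```python
# Illustrative ViT fine-tuning setup for fracture classification
# (torchvision ViT-B/16; the class count is an assumption).
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)                 # or ImageNet weights if desired
model.heads.head = nn.Linear(model.heads.head.in_features, 7)  # e.g. 7 classes

logits = model(torch.randn(2, 3, 224, 224))    # ViT-B/16 expects 224x224 inputs
pred = logits.argmax(dim=1)
```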
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Classification of Fracture and Normal Shoulder Bone X-Ray Images Using
Ensemble and Transfer Learning With Deep Learning Models Based on
Convolutional Neural Networks [0.0]
Shoulder fractures occur for various reasons; the shoulder has a wider and more varied range of movement than other joints in the body.
Images in Digital Imaging and Communications in Medicine (DICOM) format are generated for the shoulder via X-radiation (X-ray), magnetic resonance imaging (MRI), or computed tomography (CT) devices.
Shoulder bone X-ray images were classified and compared via deep learning models based on convolutional neural networks (CNNs) using transfer learning and ensemble learning.
arXiv Detail & Related papers (2021-01-31T19:20:04Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
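A runnable sketch of sequential multi-source fine-tuning (the staging order, two-class head, and dummy loaders are illustrative assumptions, not the authors' pipeline):

```python
# Multi-source fine-tuning sketch: fine-tune on each source dataset in
# turn, then on the target set (dummy data stands in for the CT datasets).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50

model = resnet50(weights=None)                  # could start from ImageNet
model.fc = nn.Linear(model.fc.in_features, 2)   # COVID vs non-COVID head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def fine_tune(model, loader, epochs=1):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()

def dummy_loader(n=8):                          # stands in for a CT dataset
    x = torch.randn(n, 3, 224, 224)
    y = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=4)

for source in [dummy_loader(), dummy_loader()]: # stage 1: source datasets
    fine_tune(model, source)
fine_tune(model, dummy_loader())                # stage 2: COVID-19 CT target
```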
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- A Deep Learning-Based Method for Automatic Segmentation of Proximal
Femur from Quantitative Computed Tomography Images [5.731199807877257]
We developed a 3D image segmentation method based on V-Net, an end-to-end fully convolutional neural network (CNN).
We performed experiments to evaluate the effectiveness of the proposed segmentation method.
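As a pointer to what "V-Net-based" means in practice, the sketch below shows the residual 3D convolution block that V-Net stacks at each resolution level of its encoder-decoder; the 5x5x5 kernels and PReLU follow the original V-Net design, but the block itself is illustrative, not the paper's code.

```python
# Minimal V-Net-style building block: 3D convolutions with a residual
# connection, the pattern V-Net repeats at each resolution level.
import torch
import torch.nn as nn

class VNetBlock(nn.Module):
    def __init__(self, channels: int, n_convs: int = 2):
        super().__init__()
        self.convs = nn.Sequential(*[
            nn.Sequential(nn.Conv3d(channels, channels, 5, padding=2),
                          nn.BatchNorm3d(channels), nn.PReLU())
            for _ in range(n_convs)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.convs(x) + x       # residual learning, as in V-Net

block = VNetBlock(16)
out = block(torch.randn(1, 16, 32, 64, 64))   # (batch, ch, depth, H, W)
```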
arXiv Detail & Related papers (2020-06-09T21:16:47Z)