Comparison of retinal regions-of-interest imaged by OCT for the
classification of intermediate AMD
- URL: http://arxiv.org/abs/2305.02832v2
- Date: Fri, 14 Jul 2023 09:31:18 GMT
- Title: Comparison of retinal regions-of-interest imaged by OCT for the
classification of intermediate AMD
- Authors: Danilo A. Jesus, Eric F. Thee, Tim Doekemeijer, Daniel Luttikhuizen,
Caroline Klaver, Stefan Klein, Theo van Walsum, Hans Vingerling, Luisa
Sanchez
- Abstract summary: A total of 15744 B-scans from 269 intermediate AMD patients and 115 normal subjects were used in this study.
For each subset, a convolutional neural network (based on VGG16 architecture and pre-trained on ImageNet) was trained and tested.
The performance of the models was evaluated using the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, and specificity.
- Score: 3.0171643773711208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To study whether it is possible to differentiate intermediate age-related
macular degeneration (AMD) from healthy controls using partial optical
coherence tomography (OCT) data, that is, restricting the input B-scans to
certain pre-defined regions of interest (ROIs). A total of 15744 B-scans from
269 intermediate AMD patients and 115 normal subjects were used in this study
(split at the subject level into 80% train, 10% validation, and 10% test). From each
OCT B-scan, three ROIs were extracted: retina, complex between retinal pigment
epithelium (RPE) and Bruch membrane (BM), and choroid (CHO). These ROIs were
obtained using two different methods: masking and cropping. In addition to the
six ROIs, the whole OCT B-scan and the binary mask corresponding to the
segmentation of the RPE-BM complex were used. For each subset, a convolutional
neural network (based on VGG16 architecture and pre-trained on ImageNet) was
trained and tested. The performance of the models was evaluated using the area
under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, and
specificity. All trained models presented an AUROC, accuracy, sensitivity, and
specificity equal to or higher than 0.884, 0.816, 0.685, and 0.644,
respectively. The model trained on the whole OCT B-scan presented the best
performance (AUROC = 0.983, accuracy = 0.927, sensitivity = 0.862, specificity
= 0.913). The models trained on the ROIs obtained with the cropping method led
to significantly higher outcomes than those obtained with masking, with the
exception of the retinal tissue, where no statistically significant difference
was observed between cropping and masking (p = 0.47). This study demonstrated
that while using the complete OCT B-scan provided the highest accuracy in
classifying intermediate AMD, models trained on specific ROIs such as the
RPE-BM complex or the choroid can still achieve high performance.
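
The subject-level 80/10/10 split described in the abstract is worth illustrating, since B-scans from one patient must not leak across partitions. A minimal sketch using scikit-learn's GroupShuffleSplit, assuming hypothetical per-B-scan arrays of paths, labels, and subject IDs (none of these names come from the paper):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical arrays, one entry per B-scan (placeholders, not the paper's data).
scan_paths = np.array([f"scan_{i}.png" for i in range(1000)])
labels = np.random.randint(0, 2, size=1000)      # 1 = intermediate AMD, 0 = control
subjects = np.random.randint(0, 100, size=1000)  # subject identifier per B-scan

# First split: roughly 80% of subjects for training, 20% held out.
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, hold_idx = next(outer.split(scan_paths, labels, groups=subjects))

# Second split: divide the held-out subjects evenly into validation and test.
inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
val_rel, test_rel = next(inner.split(scan_paths[hold_idx], labels[hold_idx],
                                     groups=subjects[hold_idx]))
val_idx, test_idx = hold_idx[val_rel], hold_idx[test_rel]

# No subject appears in more than one partition.
assert set(subjects[train_idx]).isdisjoint(subjects[val_idx])
assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
assert set(subjects[val_idx]).isdisjoint(subjects[test_idx])
```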
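The abstract distinguishes two ways of restricting a B-scan to an ROI: masking and cropping. A minimal NumPy sketch of that distinction, assuming a binary segmentation mask for the ROI is already available (the paper's actual segmentation pipeline is not described in the abstract):

```python
import numpy as np

def mask_roi(bscan: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Masking: keep the original image size, zero out everything outside the ROI."""
    return bscan * (roi_mask > 0)

def crop_roi(bscan: np.ndarray, roi_mask: np.ndarray, margin: int = 0) -> np.ndarray:
    """Cropping: cut the B-scan to the bounding box of the ROI."""
    rows = np.any(roi_mask > 0, axis=1)
    cols = np.any(roi_mask > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    r0, c0 = max(r0 - margin, 0), max(c0 - margin, 0)
    return bscan[r0:r1 + 1 + margin, c0:c1 + 1 + margin]

# Toy example: a synthetic 496x512 B-scan and a band-shaped RPE-BM mask.
bscan = np.random.rand(496, 512).astype(np.float32)
roi_mask = np.zeros_like(bscan, dtype=np.uint8)
roi_mask[300:330, :] = 1                   # hypothetical RPE-BM band
masked = mask_roi(bscan, roi_mask)         # same shape, background zeroed
cropped = crop_roi(bscan, roi_mask)        # narrow strip around the ROI
```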
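For the classifier, the abstract only states that a VGG16 backbone pre-trained on ImageNet was used; the input size, classification head, and training settings below are assumptions for illustration, not the authors' configuration. A hedged Keras sketch:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input size; grayscale B-scans replicated to three channels so that
# the ImageNet-pretrained filters can be reused.
IMG_SHAPE = (224, 224, 3)

base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=IMG_SHAPE)
base.trainable = False   # start by training only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # intermediate AMD vs. control
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc"), "accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```

Replicating the single-channel OCT intensity to three channels is one common way to reuse ImageNet weights; the paper may have handled preprocessing and fine-tuning depth differently.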
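The four reported metrics can be computed from test-set probabilities as follows; the 0.5 decision threshold is an assumption, not something stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    """AUROC, accuracy, sensitivity, and specificity for binary predictions."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auroc": roc_auc_score(y_true, y_prob),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn),   # recall for the AMD class
        "specificity": tn / (tn + fp),
    }

# Toy usage with made-up scores
y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.6, 0.3, 0.2])
print(evaluate(y_true, y_prob))
```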
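The abstract reports p-values when comparing cropping against masking but does not name the statistical test used. A paired bootstrap over the test set is one common way to compare two models' AUROCs; the sketch below is that generic procedure, not the authors' analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_diff(y_true, prob_a, prob_b, n_boot=2000, seed=0):
    """Rough two-sided p-value for AUROC(a) - AUROC(b) via a paired bootstrap."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # need both classes in the resample
            continue
        diffs.append(roc_auc_score(y_true[idx], prob_a[idx]) -
                     roc_auc_score(y_true[idx], prob_b[idx]))
    diffs = np.asarray(diffs)
    observed = roc_auc_score(y_true, prob_a) - roc_auc_score(y_true, prob_b)
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, p
```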
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001, and 0.762 versus 0.542). A minimal sketch of the Dice metric appears after this list.
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate
Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classifications (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fractures, using the largest and richest dataset assembled to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Semi-supervised learning for generalizable intracranial hemorrhage
detection and segmentation [0.0]
We develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set.
An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one US institution from 2010-2017.
A second "student" model was then trained on the combined pixel-labeled and pseudo-labeled dataset. A schematic sketch of this teacher-student scheme appears after this list.
arXiv Detail & Related papers (2021-05-03T00:14:43Z) - Classification of Fracture and Normal Shoulder Bone X-Ray Images Using
Ensemble and Transfer Learning With Deep Learning Models Based on
Convolutional Neural Networks [0.0]
Shoulder fractures can occur for various reasons; the shoulder has a wider and more varied range of movement than other joints in the body.
Images in Digital Imaging and Communications in Medicine (DICOM) format are generated for the shoulder via X-radiation (X-ray), magnetic resonance imaging (MRI), or computed tomography (CT) devices.
Shoulder bone X-ray images were classified and compared via deep learning models based on convolutional neural networks (CNNs) using transfer learning and ensemble learning.
arXiv Detail & Related papers (2021-01-31T19:20:04Z) - SCREENet: A Multi-view Deep Convolutional Neural Network for
Classification of High-resolution Synthetic Mammographic Screening Scans [3.8137985834223502]
We develop and evaluate a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms.
We assess the effect on accuracy of image resolution and training set size.
arXiv Detail & Related papers (2020-09-18T00:12:33Z) - A Deep Learning-Based Method for Automatic Segmentation of Proximal
Femur from Quantitative Computed Tomography Images [5.731199807877257]
We developed a 3D image segmentation method based on V-Net, an end-to-end fully convolutional neural network (CNN).
We performed experiments to evaluate the effectiveness of the proposed segmentation method.
arXiv Detail & Related papers (2020-06-09T21:16:47Z)
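
The TotalSegmentator MRI entry above evaluates segmentation with Dice similarity coefficients. A minimal sketch of that metric for binary masks (the smoothing term and the toy example are illustrative choices, not taken from that paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping boxes in a 64x64 mask
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:50, 20:50] = 1
print(round(dice_coefficient(a, b), 3))   # prints 0.444
```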
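The semi-supervised hemorrhage entry above describes a teacher-student scheme: a teacher trained on pixel-labeled scans pseudo-labels additional scans, and a student is trained on the union. A schematic sketch with hypothetical placeholders (train_model and predict_masks stand in for any segmentation framework; they are not that paper's code):

```python
# Schematic teacher-student pseudo-labeling loop; all function names are
# hypothetical placeholders, and the confidence filter is an assumption.
def teacher_student(labeled_scans, labels, unlabeled_scans,
                    train_model, predict_masks, confidence=0.9):
    # 1. Train the "teacher" on the pixel-labeled data only.
    teacher = train_model(labeled_scans, labels)

    # 2. The teacher pseudo-labels the unlabeled scans; keep confident masks only.
    pseudo = [(scan, mask)
              for scan, mask, conf in predict_masks(teacher, unlabeled_scans)
              if conf >= confidence]

    # 3. Train the "student" on the labeled and pseudo-labeled data combined.
    combined_scans = list(labeled_scans) + [s for s, _ in pseudo]
    combined_masks = list(labels) + [m for _, m in pseudo]
    student = train_model(combined_scans, combined_masks)
    return student
```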