Pointwise visual field estimation from optical coherence tomography in
glaucoma: a structure-function analysis using deep learning
- URL: http://arxiv.org/abs/2106.03793v1
- Date: Mon, 7 Jun 2021 16:58:38 GMT
- Title: Pointwise visual field estimation from optical coherence tomography in
glaucoma: a structure-function analysis using deep learning
- Authors: Ruben Hemelings, Bart Elen, João Barbosa Breda, Erwin Bellon,
Matthew B Blaschko, Patrick De Boever, Ingeborg Stalmans
- Abstract summary: Standard Automated Perimetry (SAP) is the gold standard to monitor visual field (VF) loss in glaucoma management.
We developed and validated a deep learning (DL) regression model that estimates pointwise and overall VF loss from unsegmented optical coherence tomography (OCT) scans.
- Score: 12.70143462176992
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Background/Aims: Standard Automated Perimetry (SAP) is the gold standard to
monitor visual field (VF) loss in glaucoma management, but is prone to
intra-subject variability. We developed and validated a deep learning (DL)
regression model that estimates pointwise and overall VF loss from unsegmented
optical coherence tomography (OCT) scans. Methods: Eight DL regression models
were trained with various retinal imaging modalities: circumpapillary OCT at
3.5mm, 4.1mm, 4.7mm diameter, and scanning laser ophthalmoscopy (SLO) en face
images to estimate mean deviation (MD) and 52 threshold values. This
retrospective study used data from patients who underwent a complete glaucoma
examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA
Standard VF exam and a SPECTRALIS OCT scan using the Glaucoma Module Premium
Edition. Results: A total of 1378 matched OCT-VF pairs of 496 patients (863
eyes) were included for training and evaluation of the DL models. Average
sample MD was -7.53dB (from -33.8dB to +2.0dB). For 52 VF threshold values
estimation, the circumpapillary OCT scan with the largest radius (4.7mm)
achieved the best performance among all individual models (Pearson r=0.77, 95%
CI=[0.72-0.82]). For MD, prediction averaging of OCT-trained models (3.5mm,
4.1mm, 4.7mm) resulted in a Pearson r of 0.78 [0.73-0.83] on the validation set
and comparable performance on the test set (Pearson r=0.79 [0.75-0.82]).
Conclusion: DL on unsegmented OCT scans accurately predicts pointwise and mean
deviation of 24-2 VF in glaucoma patients. Automated VF from unsegmented OCT
could be a solution for patients unable to produce reliable perimetry results.
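The MD result above comes from simple prediction averaging across the OCT-trained models (3.5mm, 4.1mm, 4.7mm), with Pearson r reported alongside a 95% confidence interval. A minimal sketch of that evaluation step, using synthetic MD values and a bootstrap CI (the data, function names, and noise model here are hypothetical illustrations, not the study's code):

```python
import numpy as np

def ensemble_pearson(preds, y_true, n_boot=1000, seed=0):
    """Average per-model MD predictions and report Pearson r with a
    95% bootstrap confidence interval (illustrative only)."""
    ens = np.mean(preds, axis=0)           # prediction averaging across models
    r = np.corrcoef(ens, y_true)[0, 1]     # Pearson r on the full sample
    rng = np.random.default_rng(seed)
    n = len(y_true)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample eyes with replacement
        boots.append(np.corrcoef(ens[idx], y_true[idx])[0, 1])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return r, (lo, hi)

# Toy usage with synthetic MD values (dB) over the sample range reported
# in the abstract; three simulated "OCT models" with independent noise.
rng = np.random.default_rng(1)
y = rng.uniform(-33.8, 2.0, 200)
preds = np.stack([y + rng.normal(0.0, 4.0, 200) for _ in range(3)])
r, ci = ensemble_pearson(preds, y)
```

Averaging the three models reduces prediction noise roughly by a factor of sqrt(3), which is the intuition behind the ensemble outperforming the individual OCT models on MD.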
Related papers
- Multi-centric AI Model for Unruptured Intracranial Aneurysm Detection and Volumetric Segmentation in 3D TOF-MRI [6.397650339311053]
We developed an open-source nnU-Net-based AI model for combined detection and segmentation of unruptured intracranial aneurysms (UICA) in 3D TOF-MRI.
Four distinct training datasets were created, and the nnU-Net framework was used for model development.
The primary model showed 85% sensitivity and 0.23 FP/case rate, outperforming the ADAM-challenge winner (61%) and a nnU-Net trained on ADAM data (51%) in sensitivity.
arXiv Detail & Related papers (2024-08-30T08:57:04Z)
- Corneal endothelium assessment in specular microscopy images with Fuchs' dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
- 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classifications (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- Deep learning-based detection of intravenous contrast in computed tomography scans [0.7313653675718069]
Identifying intravenous (IV) contrast use within CT scans is a key component of data curation for model development and testing.
We developed and validated a CNN-based deep learning platform to identify IV contrast within CT scans.
arXiv Detail & Related papers (2021-10-16T00:46:45Z)
- Multi-institutional Validation of Two-Streamed Deep Learning Method for Automated Delineation of Esophageal Gross Tumor Volume using planning-CT and FDG-PETCT [14.312659667401302]
The current clinical workflow for esophageal gross tumor volume (GTV) contouring relies on manual delineation, which incurs high labor costs and inter-user variability.
We validated the clinical applicability of a deep learning (DL) multi-modality esophageal GTV contouring model, developed at one institution and tested at multiple others.
arXiv Detail & Related papers (2021-10-11T13:56:09Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Semi-supervised learning for generalizable intracranial hemorrhage detection and segmentation [0.0]
We develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set.
An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one US institution from 2010-2017.
A second "student" model was trained on this combined pixel-labeled and pseudo-labeled dataset.
arXiv Detail & Related papers (2021-05-03T00:14:43Z)
- FLANNEL: Focal Loss Based Neural Network Ensemble for COVID-19 Detection [61.04937460198252]
We construct the X-ray imaging data from 2874 patients with four classes: normal, bacterial pneumonia, non-COVID-19 viral pneumonia, and COVID-19.
To identify COVID-19, we propose a Focal Loss Based Neural Ensemble Network (FLANNEL).
FLANNEL consistently outperforms baseline models on COVID-19 identification task in all metrics.
arXiv Detail & Related papers (2020-10-30T03:17:31Z)
- Accurate Prostate Cancer Detection and Segmentation on Biparametric MRI using Non-local Mask R-CNN with Histopathological Ground Truth [0.0]
We developed deep machine learning models to improve the detection and segmentation of intraprostatic lesions on bp-MRI.
Models were trained using either MRI-based or prostatectomy-based delineations.
With prostatectomy-based delineations, the non-local Mask R-CNN with fine-tuning and self-training significantly improved all evaluation metrics.
arXiv Detail & Related papers (2020-10-28T21:07:09Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance analysis of the proposed ensemble model in the basal and middle slices was similar as compared to intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.