OCTolyzer: Fully automatic analysis toolkit for segmentation and feature extracting in optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) data
- URL: http://arxiv.org/abs/2407.14128v1
- Date: Fri, 19 Jul 2024 08:56:12 GMT
- Title: OCTolyzer: Fully automatic analysis toolkit for segmentation and feature extracting in optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) data
- Authors: Jamie Burke, Justin Engelmann, Samuel Gibbon, Charlene Hamid, Diana Moukaddem, Dan Pugh, Tariq Farrah, Niall Strang, Neeraj Dhaun, Tom MacGillivray, Stuart King, Ian J. C. MacCormick
- Abstract summary: OCTolyzer is an open-source toolkit for retinochoroidal analysis in optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) images.
- Score: 3.8485899972356337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: To describe OCTolyzer: an open-source toolkit for retinochoroidal analysis in optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) images. Method: OCTolyzer has two analysis suites, for SLO and OCT images. The former enables anatomical segmentation and feature measurement of the en face retinal vessels. The latter leverages image metadata for retinal layer segmentations and deep learning-based choroid layer segmentation to compute retinochoroidal measurements such as thickness and volume. We introduce OCTolyzer and assess the reproducibility of its OCT analysis suite for choroid analysis. Results: At the population level, choroid region metrics were highly reproducible (Mean absolute error/Pearson/Spearman correlation for macular volume choroid thickness (CT):6.7$\mu$m/0.9933/0.9969, macular B-scan CT:11.6$\mu$m/0.9858/0.9889, peripapillary CT:5.0$\mu$m/0.9942/0.9940). Macular choroid vascular index (CVI) had good reproducibility (volume CVI:0.0271/0.9669/0.9655, B-scan CVI:0.0130/0.9090/0.9145). At the eye level, measurement error in regional and vessel metrics was below 5% and 20% of the population's variability, respectively. Major outliers were from poor quality B-scans with thick choroids and an invisible choroid-sclera boundary. Conclusions: OCTolyzer is the first open-source pipeline to convert OCT/SLO data into reproducible and clinically meaningful retinochoroidal measurements. OCT processing on a standard laptop CPU takes under 2 seconds for macular or peripapillary B-scans and 85 seconds for volume scans. OCTolyzer can help improve standardisation in the field of OCT/SLO image analysis and is freely available here: https://github.com/jaburke166/OCTolyzer.
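To make the reported metrics concrete: choroid thickness is typically the choroid pixel count per A-scan column scaled to micrometres, and CVI is the ratio of vessel area to total choroid area. The sketch below is illustrative only, not OCTolyzer's actual code; the function name, arguments, and pixel-scaling convention are assumptions.

```python
import numpy as np

def choroid_metrics(region_mask, vessel_mask, pixel_h_mm, pixel_w_mm):
    """Illustrative region/vessel metrics from binary B-scan masks.

    region_mask: boolean array, True where the choroid region is segmented.
    vessel_mask: boolean array, True where choroidal vessels are segmented.
    pixel_h_mm / pixel_w_mm: physical pixel height/width in millimetres.
    """
    region_px = region_mask.sum()
    # Choroid area in mm^2: pixel count times physical pixel area.
    area_mm2 = region_px * pixel_h_mm * pixel_w_mm
    # Mean thickness: choroid pixels per A-scan column, scaled to micrometres.
    col_counts = region_mask.sum(axis=0)
    thickness_um = col_counts[col_counts > 0].mean() * pixel_h_mm * 1000
    # Choroid vascular index (CVI): vessel area over total choroid area.
    cvi = (vessel_mask & region_mask).sum() / region_px
    return {"area_mm2": area_mm2, "thickness_um": thickness_um, "cvi": cvi}
```

A mask where half the choroid pixels are vessel pixels yields a CVI of 0.5; the reproducibility figures above compare such metrics across repeat scans.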
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 $\pm$ 0.066 and 0.716 $\pm$ 0.125 on the respective datasets, with an average performance of up to 0.804 $\pm$ 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - SLOctolyzer: Fully automatic analysis toolkit for segmentation and feature extracting in scanning laser ophthalmoscopy images [4.205028392035434]
The purpose of this study was to introduce SLOctolyzer: an open-source analysis toolkit for en face retinal vessels in infrared reflectance scanning laser ophthalmoscopy (SLO) images.
The segmentation module uses deep learning methods to delineate retinal anatomy, and detects the fovea and optic disc.
The measurement module quantifies the complexity, density, tortuosity, and calibre of the segmented retinal vessels.
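Of the vessel metrics listed, tortuosity has a standard definition as the ratio of a vessel centreline's arc length to its chord length. The helper below is a minimal sketch of that definition, not SLOctolyzer's implementation; the function name and input convention are assumptions.

```python
import numpy as np

def arc_chord_tortuosity(points):
    """Tortuosity of a vessel centreline as arc length / chord length.

    points: (N, 2) sequence of ordered centreline coordinates.
    Returns 1.0 for a perfectly straight segment, larger when curved.
    """
    points = np.asarray(points, dtype=float)
    # Arc length: sum of distances between consecutive centreline points.
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    # Chord length: straight-line distance between the two endpoints.
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord
```

For example, a right-angle path of two unit steps has arc length 2 and chord length $\sqrt{2}$, giving a tortuosity of about 1.414.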
arXiv Detail & Related papers (2024-06-24T09:16:17Z) - Domain-specific augmentations with resolution agnostic self-attention mechanism improves choroid segmentation in optical coherence tomography images [3.8485899972356337]
The choroid is a key vascular layer of the eye, supplying oxygen to the retinal photoreceptors.
Current methods to measure the choroid often require use of multiple, independent semi-automatic and deep learning-based algorithms.
We propose a Robust, Resolution-agnostic and Efficient Attention-based network for CHoroid segmentation (REACH).
arXiv Detail & Related papers (2024-05-23T11:35:23Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method can effectively correct motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - nnUNet RASPP for Retinal OCT Fluid Detection, Segmentation and Generalisation over Variations of Data Sources [25.095695898777656]
We propose two variants of the nnUNet with consistent high performance across images from multiple device vendors.
The algorithm was validated on the MICCAI 2017 RETOUCH challenge dataset.
Experimental results show that our algorithms outperform the current state-of-the-art methods.
arXiv Detail & Related papers (2023-02-25T23:47:23Z) - Are Macula or Optic Nerve Head Structures better at Diagnosing Glaucoma? An Answer using AI and Wide-Field Optical Coherence Tomography [48.7576911714538]
We developed a deep learning algorithm to automatically segment structures of the optic nerve head (ONH) and macula in 3D wide-field OCT scans.
Our segmentation algorithm was able to delineate ONH and macular tissues with a Dice coefficient (DC) of 0.94 $\pm$ 0.003.
This may encourage the mainstream adoption of 3D wide-field OCT scans.
arXiv Detail & Related papers (2022-10-13T01:51:29Z) - Lymphocyte Classification in Hyperspectral Images of Ovarian Cancer Tissue Biopsy Samples [94.37521840642141]
We present a machine learning pipeline to segment white blood cell pixels in hyperspectral images of biopsy cores.
These cells are clinically important for diagnosis, but some prior work has struggled to incorporate them due to difficulty obtaining precise pixel labels.
arXiv Detail & Related papers (2022-03-23T00:58:27Z) - A novel optical needle probe for deep learning-based tissue elasticity characterization [59.698811329287174]
Optical coherence elastography (OCE) probes have been proposed for needle insertions but have so far lacked the necessary load sensing capabilities.
We present a novel OCE needle probe that provides simultaneous optical coherence tomography (OCT) imaging and load sensing at the needle tip.
arXiv Detail & Related papers (2021-09-20T08:29:29Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Exploiting the Transferability of Deep Learning Systems Across Multi-modal Retinal Scans for Extracting Retinopathy Lesions [11.791160309522013]
This paper presents a detailed evaluation of semantic segmentation, scene parsing and hybrid deep learning systems for extracting the retinal lesions.
We present a novel strategy exploiting the transferability of these models across multiple retinal scanner specifications.
Overall, a hybrid retinal analysis and grading network (RAGNet) with a ResNet-50 backbone ranked first for extracting the retinal lesions.
arXiv Detail & Related papers (2020-06-04T06:25:25Z) - Automated segmentation of retinal fluid volumes from structural and angiographic optical coherence tomography using deep learning [2.041049231600541]
We proposed a deep convolutional neural network (CNN) named Retinal Fluid Network (ReF-Net) to segment volumetric retinal fluid on optical coherence tomography (OCT) volumes.
Cross-sectional OCT and angiography (OCTA) scans were used for training and testing ReF-Net.
ReF-Net shows high accuracy (F1 = 0.864 $\pm$ 0.084) in retinal fluid segmentation.
arXiv Detail & Related papers (2020-06-03T22:55:47Z)
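Several entries above report segmentation overlap as a Dice coefficient or F1 score; for binary masks the two are equivalent. The snippet below is a minimal reference implementation of that standard formula, not code from any of the listed papers.

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient for binary masks (equals F1 here).

    Dice = 2 * |pred AND target| / (|pred| + |target|).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom else 1.0
```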
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.