OCTolyzer: Fully automatic toolkit for segmentation and feature extracting in optical coherence tomography and scanning laser ophthalmoscopy data
- URL: http://arxiv.org/abs/2407.14128v2
- Date: Mon, 13 Jan 2025 12:23:55 GMT
- Title: OCTolyzer: Fully automatic toolkit for segmentation and feature extracting in optical coherence tomography and scanning laser ophthalmoscopy data
- Authors: Jamie Burke, Justin Engelmann, Samuel Gibbon, Charlene Hamid, Diana Moukaddem, Dan Pugh, Tariq Farrah, Niall Strang, Neeraj Dhaun, Tom MacGillivray, Stuart King, Ian J. C. MacCormick
- Abstract summary: OCTolyzer is the first open-source toolkit for retinochoroidal analysis in OCT/SLO data. It features two analysis suites for OCT and SLO data, facilitating deep learning-based anatomical segmentation. It can convert OCT/SLO data into reproducible and clinically meaningful retinochoroidal features.
- Score: 3.8485899972356337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) of the eye have become essential to ophthalmology and the emerging field of oculomics, creating a need for transparent, reproducible, and rapid analysis of these data for clinical research and the wider research community. Here, we introduce OCTolyzer, the first open-source toolkit for retinochoroidal analysis in OCT/SLO data. It features two analysis suites for OCT and SLO data, facilitating deep learning-based anatomical segmentation and feature extraction of the cross-sectional retinal and choroidal layers and en face retinal vessels. We describe OCTolyzer and evaluate the reproducibility of its OCT choroid analysis. At the population level, metrics for choroid region thickness were highly reproducible, with a mean absolute error (MAE)/Pearson correlation for macular volume choroid thickness (CT) of 6.7$\mu$m/0.99, macular B-scan CT of 11.6$\mu$m/0.99, and peripapillary CT of 5.0$\mu$m/0.99. Macular choroid vascular index (CVI) also showed strong reproducibility, with MAE/Pearson for volume CVI of 0.0271/0.97 and B-scan CVI of 0.0130/0.91. At the eye level, measurement noise for regional and vessel metrics was below 5% and 20% of the population's variability, respectively. Outliers were caused by poor-quality B-scans with thick choroids and an invisible choroid-sclera boundary. Processing times on a laptop CPU were under three seconds for macular/peripapillary B-scans and 85 seconds for volume scans. OCTolyzer can convert OCT/SLO data into reproducible and clinically meaningful retinochoroidal features and will improve the standardisation of ocular measurements in OCT/SLO image analysis, requiring no specialised training or proprietary software. OCTolyzer is freely available here: https://github.com/jaburke166/OCTolyzer.
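For readers who want a concrete picture of the reproducibility figures above, the sketch below shows how the reported metrics, mean absolute error and Pearson correlation between repeated choroid thickness measurements, plus a choroid vascular index (vessel area over choroid region area), can be computed with NumPy/SciPy. It is illustrative only, with made-up arrays and toy masks; it is not code taken from the OCTolyzer package.

```python
# Illustrative only: not part of OCTolyzer. Shows how the reported
# reproducibility metrics (MAE, Pearson r) and a choroid vascular index
# (CVI = vessel area / choroid region area) can be computed.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired choroid thickness (CT) measurements in microns,
# from two repeated acquisitions of the same eyes.
ct_repeat1 = np.array([250.0, 310.5, 287.2, 402.8, 199.4])
ct_repeat2 = np.array([255.1, 305.0, 290.0, 410.2, 203.3])

mae = np.mean(np.abs(ct_repeat1 - ct_repeat2))   # mean absolute error (microns)
r, _ = pearsonr(ct_repeat1, ct_repeat2)          # Pearson correlation

# CVI from binary masks of the segmented choroid region and its vessels
# (toy masks here; in practice they come from the segmentation step).
choroid_mask = np.zeros((768, 768), dtype=bool)
vessel_mask = np.zeros((768, 768), dtype=bool)
choroid_mask[300:500, :] = True
vessel_mask[320:460, ::2] = True
cvi = vessel_mask.sum() / choroid_mask.sum()

print(f"MAE: {mae:.1f} um, Pearson r: {r:.2f}, CVI: {cvi:.3f}")
```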
Related papers
- SLOctolyzer: Fully automatic analysis toolkit for segmentation and feature extracting in scanning laser ophthalmoscopy images [4.205028392035434]
The purpose of this study was to introduce SLOctolyzer: an open-source analysis toolkit for en face retinal vessels in infrared reflectance scanning laser ophthalmoscopy (SLO) images.
The segmentation module uses deep learning methods to delineate retinal anatomy, and detects the fovea and optic disc.
The measurement module quantifies the complexity, density, tortuosity, and calibre of the segmented retinal vessels (an illustrative sketch of such vessel metrics appears after this related-papers list).
arXiv Detail & Related papers (2024-06-24T09:16:17Z) - Domain-specific augmentations with resolution agnostic self-attention mechanism improves choroid segmentation in optical coherence tomography images [3.8485899972356337]
The choroid is a key vascular layer of the eye, supplying oxygen to the retinal photoreceptors.
Current methods to measure the choroid often require use of multiple, independent semi-automatic and deep learning-based algorithms.
We propose a Robust, Resolution-agnostic and Efficient Attention-based network for CHoroid segmentation (REACH).
arXiv Detail & Related papers (2024-05-23T11:35:23Z) - Tissue Segmentation of Thick-Slice Fetal Brain MR Scans with Guidance from High-Quality Isotropic Volumes [52.242103848335354]
We propose a novel Cycle-Consistent Domain Adaptation Network (C2DA-Net) to efficiently transfer the knowledge learned from high-quality isotropic volumes for accurate tissue segmentation of thick-slice scans.
Our C2DA-Net can fully utilize a small set of annotated isotropic volumes to guide tissue segmentation on unannotated thick-slice scans.
arXiv Detail & Related papers (2023-08-13T12:51:15Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Deep-Learning-based Vasculature Extraction for Single-Scan Optical Coherence Tomography Angiography [9.77526300425824]
We propose a vasculature extraction pipeline that uses only one-repeated OCT scan to generate OCTA images.
The pipeline is based on the proposed Vasculature Extraction Transformer (VET), which leverages convolutional projection to better learn the spatial relationships between image patches.
arXiv Detail & Related papers (2023-04-17T13:55:26Z) - O2CTA: Introducing Annotations from OCT to CCTA in Coronary Plaque Analysis [19.099761377777412]
Coronary CT angiography (CCTA) is widely used for artery imaging and determining the stenosis degree.
This can be addressed with invasive optical coherence tomography (OCT) without much trouble for physicians, but it brings higher costs and potential risks to patients.
We propose a method to handle the O2CTA problem. CCTA scans are first reconstructed into multi-planar reformatted (MPR) images, which agree with OCT images in terms of semantic content.
The artery segment in OCT, which is manually labelled, is then spatially aligned with the entire artery in MPR images via the proposed alignment strategy.
arXiv Detail & Related papers (2023-03-11T09:40:05Z) - nnUNet RASPP for Retinal OCT Fluid Detection, Segmentation and Generalisation over Variations of Data Sources [25.095695898777656]
We propose two variants of the nnUNet with consistently high performance across images from multiple device vendors.
The algorithm was validated on the MICCAI 2017 RETOUCH challenge dataset.
Experimental results show that our algorithms outperform current state-of-the-art algorithms.
arXiv Detail & Related papers (2023-02-25T23:47:23Z) - Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z) - Are Macula or Optic Nerve Head Structures better at Diagnosing Glaucoma? An Answer using AI and Wide-Field Optical Coherence Tomography [48.7576911714538]
We developed a deep learning algorithm to automatically segment structures of the optic nerve head (ONH) and macula in 3D wide-field OCT scans.
Our segmentation algorithm was able to segment ONH and macular tissues with a DC of 0.94 $\pm$ 0.003.
This may encourage the mainstream adoption of 3D wide-field OCT scans.
arXiv Detail & Related papers (2022-10-13T01:51:29Z) - Lymphocyte Classification in Hyperspectral Images of Ovarian Cancer Tissue Biopsy Samples [94.37521840642141]
We present a machine learning pipeline to segment white blood cell pixels in hyperspectral images of biopsy cores.
These cells are clinically important for diagnosis, but some prior work has struggled to incorporate them due to difficulty obtaining precise pixel labels.
arXiv Detail & Related papers (2022-03-23T00:58:27Z) - A novel optical needle probe for deep learning-based tissue elasticity characterization [59.698811329287174]
Optical coherence elastography (OCE) probes have been proposed for needle insertions but have so far lacked the necessary load sensing capabilities.
We present a novel OCE needle probe that provides simultaneous optical coherence tomography (OCT) imaging and load sensing at the needle tip.
arXiv Detail & Related papers (2021-09-20T08:29:29Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Exploiting the Transferability of Deep Learning Systems Across Multi-modal Retinal Scans for Extracting Retinopathy Lesions [11.791160309522013]
This paper presents a detailed evaluation of semantic segmentation, scene parsing and hybrid deep learning systems for extracting the retinal lesions.
We present a novel strategy exploiting the transferability of these models across multiple retinal scanner specifications.
Overall, a hybrid retinal analysis and grading network (RAGNet), with a ResNet-50 backbone, stood first for extracting retinal lesions.
arXiv Detail & Related papers (2020-06-04T06:25:25Z) - Automated segmentation of retinal fluid volumes from structural and angiographic optical coherence tomography using deep learning [2.041049231600541]
We proposed a deep convolutional neural network (CNN) named Retinal Fluid Network (ReF-Net) to segment volumetric retinal fluid on optical coherence tomography (OCT) volumes.
Cross-sectional OCT and OCT angiography (OCTA) scans were used for training and testing ReF-Net.
ReF-Net shows high accuracy (F1 = 0.864 +/- 0.084) in retinal fluid segmentation.
arXiv Detail & Related papers (2020-06-03T22:55:47Z) - Microvasculature Segmentation and Inter-capillary Area Quantification of the Deep Vascular Complex using Transfer Learning [0.0]
We demonstrate accurate segmentation of the superficial vascular complex and deep vascular plexus using a convolutional neural network (CNN) for quantitative analysis.
We used transfer learning from a CNN trained on 76 images from smaller FOVs of the SCP acquired using different OCT systems.
arXiv Detail & Related papers (2020-03-19T22:27:02Z)
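As flagged in the SLOctolyzer entry above, the following is a rough, self-contained sketch of how en face vessel metrics such as density, calibre, and tortuosity are commonly derived from a binary vessel segmentation mask. It uses scikit-image for skeletonisation and is illustrative only: it is not the SLOctolyzer or OCTolyzer implementation, and the simplified definitions (e.g. arc length approximated by skeleton pixel count) are assumptions made for the example.

```python
# Illustrative vessel metrics from a binary vessel mask (not SLOctolyzer code).
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def vessel_metrics(vessel_mask: np.ndarray) -> dict:
    """Compute simple vessel metrics from a boolean segmentation mask.

    density    : fraction of image pixels labelled as vessel
    calibre_px : mean vessel width in pixels, approximated as
                 vessel area / centreline length
    tortuosity : centreline path length / chord length, averaged over
                 connected skeleton segments (crude approximation)
    """
    density = float(vessel_mask.mean())

    skeleton = skeletonize(vessel_mask)               # 1-pixel-wide centrelines
    calibre = float(vessel_mask.sum()) / max(int(skeleton.sum()), 1)

    tortuosities = []
    for region in regionprops(label(skeleton)):
        coords = region.coords                        # (row, col) skeleton pixels
        if len(coords) < 10:                          # ignore tiny fragments
            continue
        path_length = float(len(coords))              # pixel-count arc length
        chord = float(np.linalg.norm(coords.max(axis=0) - coords.min(axis=0)))
        if chord > 0:
            tortuosities.append(path_length / chord)

    return {
        "density": density,
        "calibre_px": calibre,
        "mean_tortuosity": float(np.mean(tortuosities)) if tortuosities else float("nan"),
    }

# Example: a toy mask with one horizontal "vessel" three pixels thick.
mask = np.zeros((256, 256), dtype=bool)
mask[100:103, 20:220] = True
print(vessel_metrics(mask))
```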