Spatial-aware Transformer-GRU Framework for Enhanced Glaucoma Diagnosis from 3D OCT Imaging
- URL: http://arxiv.org/abs/2403.05702v1
- Date: Fri, 8 Mar 2024 22:25:15 GMT
- Title: Spatial-aware Transformer-GRU Framework for Enhanced Glaucoma Diagnosis from 3D OCT Imaging
- Authors: Mona Ashtari-Majlan, Mohammad Mahdi Dehshibi, David Masip
- Abstract summary: We present a novel deep learning framework that leverages the diagnostic value of 3D Optical Coherence Tomography (OCT) imaging for automated glaucoma detection.
We integrate a pre-trained Vision Transformer on retinal data for rich slice-wise feature extraction and a bidirectional Gated Recurrent Unit for capturing inter-slice spatial dependencies.
Experimental results on a large dataset demonstrate the superior performance of the proposed method over state-of-the-art ones.
- Score: 1.8416014644193066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Glaucoma, a leading cause of irreversible blindness, necessitates
early detection and timely intervention to prevent vision loss. In this study,
we present a novel deep learning framework that leverages
the diagnostic value of 3D Optical Coherence Tomography (OCT) imaging for
automated glaucoma detection. In this framework, we integrate a pre-trained
Vision Transformer on retinal data for rich slice-wise feature extraction and a
bidirectional Gated Recurrent Unit for capturing inter-slice spatial
dependencies. This dual-component approach enables comprehensive analysis of
local nuances and global structural integrity, crucial for accurate glaucoma
diagnosis. Experimental results on a large dataset demonstrate the superior
performance of the proposed method over state-of-the-art ones, achieving an
F1-score of 93.58%, Matthews Correlation Coefficient (MCC) of 73.54%, and AUC
of 95.24%. The framework's ability to leverage the valuable information in 3D
OCT data holds significant potential for enhancing clinical decision support
systems and improving patient outcomes in glaucoma management.
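The two-stage design described in the abstract (slice-wise feature extraction followed by a bidirectional GRU over the slice axis) can be sketched as follows. This is a minimal illustration of the data flow only: the pre-trained Vision Transformer is replaced by a fixed random projection, all weights are random and untrained, and the sizes are toy values, so it does not reproduce the paper's actual model or results.

```python
import numpy as np

rng = np.random.default_rng(0)
D_FEAT, D_HID, N_SLICES = 16, 8, 6   # toy sizes; the real model uses ViT-scale features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_gru(d_in, d_hid):
    g = lambda *s: rng.normal(0.0, 0.1, s)
    return {k: g(d_hid, d_in) for k in ("Wz", "Wr", "Wn")} | \
           {k: g(d_hid, d_hid) for k in ("Uz", "Ur", "Un")}

def gru_step(x, h, p):
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)        # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)        # reset gate
    n = np.tanh(p["Wn"] @ x + p["Un"] @ (r * h))  # candidate state
    return (1 - z) * h + z * n

def run_gru(seq, p):
    h = np.zeros(D_HID)
    for x in seq:
        h = gru_step(x, h, p)
    return h

# 1) Slice-wise features: stand-in for the pre-trained Vision Transformer,
#    here just a fixed random projection of each flattened OCT B-scan.
W_vit = rng.normal(0.0, 0.1, (D_FEAT, 32 * 32))
volume = rng.normal(size=(N_SLICES, 32, 32))      # toy 3D OCT volume
feats = [W_vit @ s.ravel() for s in volume]

# 2) Bidirectional GRU over the slice axis captures inter-slice dependencies:
#    one pass front-to-back, one back-to-front, final states concatenated.
p_fwd, p_bwd = init_gru(D_FEAT, D_HID), init_gru(D_FEAT, D_HID)
h = np.concatenate([run_gru(feats, p_fwd), run_gru(feats[::-1], p_bwd)])

# 3) Linear head -> glaucoma probability (untrained, so the value is arbitrary).
w_cls = rng.normal(0.0, 0.1, 2 * D_HID)
prob = sigmoid(w_cls @ h)
print(0.0 < prob < 1.0)  # True
```

A trained version would swap the random projection for real per-B-scan ViT features and learn the GRU and classifier weights end to end.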
Related papers
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence of the data-generating distribution across all spatial dimensions of biological specimens.
We focus on applying SliceR to stimulated Raman histology (SRH), which offers rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z) - Weakly supervised segmentation of intracranial aneurysms using a novel 3D focal modulation UNet [0.5106162890866905]
We propose FocalSegNet, a novel 3D focal modulation UNet, to detect an aneurysm and offer an initial, coarse segmentation of it from time-of-flight MRA image patches.
We trained and evaluated our model on a public dataset, and in terms of UIA detection, our model showed a low false-positive rate of 0.21 and a high sensitivity of 0.80.
arXiv Detail & Related papers (2023-08-06T03:28:08Z) - Automatic diagnosis of knee osteoarthritis severity using Swin transformer [55.01037422579516]
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint.
We propose an automated approach that employs the Swin Transformer to predict the severity of KOA.
arXiv Detail & Related papers (2023-07-10T09:49:30Z) - nnUNet RASPP for Retinal OCT Fluid Detection, Segmentation and Generalisation over Variations of Data Sources [25.095695898777656]
We propose two variants of the nnUNet with consistent high performance across images from multiple device vendors.
The algorithm was validated on the MICCAI 2017 RETOUCH challenge dataset.
Experimental results show that our algorithms outperform the current state-of-the-art algorithms.
arXiv Detail & Related papers (2023-02-25T23:47:23Z) - Feature Representation Learning for Robust Retinal Disease Detection from Optical Coherence Tomography Images [0.0]
Ophthalmic images may contain identical-looking pathologies that can cause failure in automated techniques to distinguish different retinal degenerative diseases.
In this work, we propose a robust disease detection architecture with three learning heads.
Our experimental results on two publicly available OCT datasets illustrate that the proposed model outperforms existing state-of-the-art models in terms of accuracy, interpretability, and robustness for out-of-distribution retinal disease detection.
arXiv Detail & Related papers (2022-06-24T07:59:36Z) - Geometric Deep Learning to Identify the Critical 3D Structural Features of the Optic Nerve Head for Glaucoma Diagnosis [52.06403518904579]
The optic nerve head (ONH) undergoes complex and deep 3D morphological changes during the development and progression of glaucoma.
We used PointNet and dynamic graph convolutional neural network (DGCNN) to diagnose glaucoma from 3D ONH point clouds.
Our approach may have strong potential to be used in clinical applications for the diagnosis and prognosis of a wide range of ophthalmic disorders.
arXiv Detail & Related papers (2022-04-14T12:52:10Z) - Deep Learning based Framework for Automatic Diagnosis of Glaucoma based on analysis of Focal Notching in the Optic Nerve Head [0.2580765958706854]
We propose a deep learning-based pipeline for automatic segmentation of the optic disc (OD) and optic cup (OC) regions from Digital Fundus Images (DFIs).
This methodology uses focal notch analysis of the neuroretinal rim along with cup-to-disc ratio values as classifying parameters to enhance the accuracy of computer-aided diagnosis (CAD) systems in analyzing glaucoma.
The proposed pipeline was evaluated on the freely available DRISHTI-GS dataset with a resultant accuracy of 93.33% for detecting Glaucoma from DFIs.
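The cup-to-disc ratio used as a classifying parameter above is derived from the OD/OC segmentation masks. A minimal sketch of one common variant, the vertical cup-to-disc ratio, computed on toy binary masks (the paper's exact formulation may differ):

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks.

    disc_mask, cup_mask: 2D boolean arrays where True marks the region.
    Returns cup height divided by disc height along the vertical axis.
    """
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    if disc_rows.size == 0:
        raise ValueError("empty disc mask")
    disc_h = disc_rows[-1] - disc_rows[0] + 1
    cup_h = 0 if cup_rows.size == 0 else cup_rows[-1] - cup_rows[0] + 1
    return cup_h / disc_h

# Toy masks: disc spans rows 10..29 (height 20), cup spans rows 15..24 (height 10).
disc = np.zeros((40, 40), dtype=bool)
disc[10:30, 10:30] = True
cup = np.zeros((40, 40), dtype=bool)
cup[15:25, 15:25] = True
print(vertical_cdr(disc, cup))  # 0.5
```

Higher ratios indicate more cupping of the optic nerve head, which is why the value is informative as a glaucoma-classification feature.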
arXiv Detail & Related papers (2021-12-10T18:58:40Z) - The Three-Dimensional Structural Configuration of the Central Retinal Vessel Trunk and Branches as a Glaucoma Biomarker [41.97805846007449]
We trained a deep learning network to automatically segment the CRVT&B from the B-scans of the optical coherence tomography volume of the optic nerve head (ONH).
The 3D and 2D diagnostic networks were able to differentiate glaucoma from non-glaucoma subjects with accuracies of 82.7% and 83.3%, respectively.
arXiv Detail & Related papers (2021-11-07T04:41:49Z) - Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect because it remains asymptomatic until the disease is advanced.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z) - An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.