Real-Time Glaucoma Detection from Digital Fundus Images using Self-ONNs
- URL: http://arxiv.org/abs/2109.13604v1
- Date: Tue, 28 Sep 2021 10:27:01 GMT
- Title: Real-Time Glaucoma Detection from Digital Fundus Images using Self-ONNs
- Authors: Ozer Can Devecioglu, Junaid Malik, Turker Ince, Serkan Kiranyaz, Eray
Atalay, and Moncef Gabbouj
- Abstract summary: Glaucoma leads to permanent vision disability by damaging the optic nerve that transmits visual images to the brain.
Although various deep learning models have been applied to detect glaucoma from digital fundus images, their generalization performance is limited by the scarcity of labeled data.
In this study, compact Self-Organized Operational Neural Networks (Self-ONNs) are proposed for early detection of glaucoma in fundus images.
- Score: 22.863901758361692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Glaucoma leads to permanent vision disability by damaging the optic nerve
that transmits visual images to the brain. Because glaucoma shows no symptoms as it
progresses and cannot be halted at later stages, diagnosis in its early stages is
critical. Although various deep learning models have been applied to detect glaucoma
from digital fundus images, the scarcity of labeled data has limited their
generalization performance, and they suffer from high computational complexity and
special hardware requirements. In this study, compact Self-Organized Operational
Neural Networks (Self-ONNs) are proposed for early detection of glaucoma in fundus
images, and their performance is compared against conventional (deep) Convolutional
Neural Networks (CNNs) over three benchmark datasets: ACRIMA, RIM-ONE, and ESOGU. The
experimental results demonstrate that Self-ONNs not only achieve superior detection
performance but also significantly reduce computational complexity, making them a
potentially suitable network model for biomedical datasets, especially when data is
scarce.
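The key difference from a CNN is that a Self-ONN replaces the fixed linear (multiply-accumulate) neuron with a "generative" neuron whose nodal operator is learned as a truncated Maclaurin series. A minimal sketch of one such neuron, assuming a degree-Q polynomial nodal operator, summation as the pool operator, and tanh activation (the class name, initialization scheme, and defaults here are illustrative, not taken from the paper):

```python
import math
import random

class GenerativeNeuron:
    """One Self-ONN-style operational neuron: each input x contributes
    sum_{q=1..Q} w_q * x**q instead of a single w * x (CNN case Q=1)."""

    def __init__(self, n_inputs: int, q: int = 3, seed: int = 0):
        rng = random.Random(seed)
        # One weight per input per polynomial order (hypothetical init).
        self.weights = [[rng.uniform(-0.1, 0.1) for _ in range(q)]
                        for _ in range(n_inputs)]
        self.bias = 0.0
        self.q = q

    def forward(self, xs):
        # Pool operator: summation over inputs and polynomial orders.
        total = self.bias
        for x, w_row in zip(xs, self.weights):
            total += sum(w * x ** (p + 1) for p, w in enumerate(w_row))
        # Activation: tanh, so the output lies in (-1, 1).
        return math.tanh(total)

neuron = GenerativeNeuron(n_inputs=4, q=3)
out = neuron.forward([0.5, -0.2, 0.1, 0.8])
```

With `q=1` this reduces to an ordinary weighted-sum neuron, which is why compact Self-ONNs can match or exceed deeper CNNs: the extra polynomial terms add expressive power per neuron rather than per layer.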
Related papers
- OAH-Net: A Deep Neural Network for Hologram Reconstruction of Off-axis Digital Holographic Microscope [5.835347176172883]
We propose a novel reconstruction approach that integrates deep learning with the physical principles of off-axis holography.
Our off-axis hologram network (OAH-Net) retrieves phase and amplitude images with errors that fall within the measurement error range attributable to hardware.
This capability further expands off-axis holography's applications in both biological and medical studies.
arXiv Detail & Related papers (2024-10-17T14:25:18Z) - InceptionCaps: A Performant Glaucoma Classification Model for
Data-scarce Environment [0.0]
Glaucoma is an irreversible ocular disease and the second leading cause of visual disability worldwide.
This work reviews existing state of the art models and proposes InceptionCaps, a novel capsule network (CapsNet) based deep learning model having pre-trained InceptionV3 as its convolution base, for automatic glaucoma classification.
InceptionCaps achieved an accuracy of 0.956, specificity of 0.96, and AUC of 0.9556, which surpasses several state-of-the-art deep learning model performances on the RIM-ONE v2 dataset.
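The three figures reported for InceptionCaps can be illustrated with their standard definitions; the toy labels and scores below are hypothetical, purely to show how each metric is computed for a binary glaucoma/healthy task:

```python
def accuracy(y_true, y_pred):
    # Fraction of all samples classified correctly.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def specificity(y_true, y_pred):
    # True-negative rate: correctly identified healthy (class 0) samples.
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    return tn / negatives

def auc(y_true, scores):
    # Rank-based AUC: probability a positive scores above a negative,
    # counting ties as half a win.
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predictions: 1 = glaucoma, 0 = healthy.
y_true = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.6, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
```

Note that AUC is computed from the continuous scores before thresholding, which is why it can differ from accuracy even when both are reported on the same dataset.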
arXiv Detail & Related papers (2023-11-24T11:58:11Z) - K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality
Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z) - Application of attention-based Siamese composite neural network in medical image recognition [6.370635116365471]
This study has established a recognition model based on attention and Siamese neural network.
The Attention-Based neural network is used as the main network to improve the classification effect.
The results show that the fewer the image samples, the more pronounced the advantage.
arXiv Detail & Related papers (2023-04-19T16:09:59Z) - Self-Supervised Endoscopic Image Key-Points Matching [1.3764085113103222]
This paper proposes a novel self-supervised approach for endoscopic image matching based on deep learning techniques.
Our method outperformed standard hand-crafted local feature descriptors in terms of precision and recall.
arXiv Detail & Related papers (2022-08-24T10:47:21Z) - RADNet: Ensemble Model for Robust Glaucoma Classification in Color
Fundus Images [0.0]
Glaucoma is one of the most severe eye diseases, characterized by rapid progression and leading to irreversible blindness.
Regular glaucoma screenings of the population would improve early-stage detection; however, the desirable frequency of ophthalmological checkups is often not feasible.
In our work, we propose an advanced image pre-processing technique combined with an ensemble of deep classification networks.
arXiv Detail & Related papers (2022-05-25T16:48:00Z) - Assessing glaucoma in retinal fundus photographs using Deep Feature
Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the damage is severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image
Non-Uniform Illumination Removal [96.12120000492962]
The quality of retinal images is often clinically unsatisfactory due to eye lesions and imperfect imaging process.
One of the most challenging quality degradation issues in retinal images is non-uniform illumination.
We propose a non-uniform illumination removal network for retinal image, called NuI-Go.
arXiv Detail & Related papers (2020-08-07T04:31:33Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z) - Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and
Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN)
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.