GAMMA Challenge: Glaucoma grAding from Multi-Modality imAges
- URL: http://arxiv.org/abs/2202.06511v2
- Date: Wed, 16 Feb 2022 02:56:56 GMT
- Title: GAMMA Challenge: Glaucoma grAding from Multi-Modality imAges
- Authors: Junde Wu, Huihui Fang, Fei Li, Huazhu Fu, Fengbin Lin, Jiongcheng Li,
Lexing Huang, Qinji Yu, Sifan Song, Xingxing Xu, Yanyu Xu, Wensai Wang,
Lingxiao Wang, Shuai Lu, Huiqi Li, Shihua Huang, Zhichao Lu, Chubin Ou, Xifei
Wei, Bingyuan Liu, Riadh Kobbi, Xiaoying Tang, Li Lin, Qiang Zhou, Qiang Hu,
Hrvoje Bogunovic, José Ignacio Orlando, Xiulan Zhang, Yanwu Xu
- Abstract summary: We set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading.
The primary task of the challenge is to grade glaucoma from both the 2D fundus images and 3D OCT scanning volumes.
We have publicly released a glaucoma annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading.
- Score: 48.98620387924817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color fundus photography and Optical Coherence Tomography (OCT) are the two
most cost-effective tools for glaucoma screening. Both modalities exhibit
prominent biomarkers indicative of suspected glaucoma. Clinically, it is
often recommended to undergo both screenings for a more accurate and
reliable diagnosis. However, although numerous computer-aided diagnosis
algorithms have been proposed based on fundus images or OCT volumes, few
methods leverage both modalities for glaucoma assessment. Inspired
by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held
previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA)
Challenge to encourage the development of fundus & OCT-based glaucoma grading.
The primary task of the challenge is to grade glaucoma from both the 2D fundus
images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released
a glaucoma annotated dataset with both 2D fundus color photography and 3D OCT
volumes, which is the first multi-modality dataset for glaucoma grading. In
addition, an evaluation framework is also established to evaluate the
performance of the submitted methods. During the challenge, 1,272 results were
submitted, and the top 10 teams were selected for the final stage. We analyze
their results and summarize their methods in this paper. Since all of these
teams submitted their source code, we also conducted a detailed ablation
study to verify the effectiveness of the particular modules they proposed.
We find that many of the proposed techniques are practical for the
clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT
multi-modality glaucoma grading, we believe the GAMMA Challenge will be an
essential starting point for future research.
Related papers
- ELF: An End-to-end Local and Global Multimodal Fusion Framework for
Glaucoma Grading [43.12236694270165]
We propose ELF, an end-to-end local and global multi-modal fusion framework for glaucoma grading.
ELF can fully exploit the complementary information between fundus and OCT.
Extensive experiments on the multi-modal GAMMA glaucoma grading dataset demonstrate the effectiveness of ELF.
arXiv Detail & Related papers (2023-11-14T09:51:00Z) - Segmentation-based Information Extraction and Amalgamation in Fundus
Images for Glaucoma Detection [3.5426952641410496]
The relationship between fundus images and segmentation masks in terms of joint decision-making in glaucoma assessment is rarely explored.
We propose a novel segmentation-based information extraction and amalgamation method for the task of glaucoma detection.
arXiv Detail & Related papers (2022-09-23T07:39:17Z) - Dataset and Evaluation algorithm design for GOALS Challenge [39.424658343179274]
Glaucoma causes irreversible vision loss due to damage to the optic nerve, and there is no cure for glaucoma.
To promote the research of AI technology in quantifying OCT-assisted diagnosis of glaucoma, we held a Glaucoma OCT Analysis and Layer Intervention (GOALS) Challenge.
This paper describes the released 300 circumpapillary OCT images, the baselines of the two sub-tasks, and the evaluation methodology.
arXiv Detail & Related papers (2022-07-29T02:51:26Z) - FetReg2021: A Challenge on Placental Vessel Segmentation and
Registration in Fetoscopy [52.3219875147181]
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS).
The procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination.
Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking.
Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic videos.
arXiv Detail & Related papers (2022-06-24T23:44:42Z) - Geometric Deep Learning to Identify the Critical 3D Structural Features
of the Optic Nerve Head for Glaucoma Diagnosis [52.06403518904579]
The optic nerve head (ONH) undergoes complex and deep 3D morphological changes during the development and progression of glaucoma.
We used PointNet and dynamic graph convolutional neural network (DGCNN) to diagnose glaucoma from 3D ONH point clouds.
Our approach may have strong potential to be used in clinical applications for the diagnosis and prognosis of a wide range of ophthalmic disorders.
arXiv Detail & Related papers (2022-04-14T12:52:10Z) - REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma
Assessment [45.41988445653055]
The REFUGE2 challenge released 2,000 color fundus images acquired with four device models: Zeiss, Canon, Kowa, and Topcon.
Three sub-tasks were designed in the challenge: glaucoma classification, cup/optic disc segmentation, and macular fovea localization.
This article summarizes the methods of some of the finalists and analyzes their results.
arXiv Detail & Related papers (2022-02-18T02:56:21Z) - COROLLA: An Efficient Multi-Modality Fusion Framework with Supervised
Contrastive Learning for Glaucoma Grading [1.2250035750661867]
We propose an efficient multi-modality supervised contrastive learning framework, named COROLLA, for glaucoma grading.
We employ supervised contrastive learning to increase our models' discriminative power with better convergence.
On the GAMMA dataset, our COROLLA framework achieves superior glaucoma grading performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-11T06:00:51Z) - Assessing glaucoma in retinal fundus photographs using Deep Feature
Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the disease is severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z) - AGE Challenge: Angle Closure Glaucoma Evaluation in Anterior Segment
Optical Coherence Tomography [61.405005501608706]
Angle closure glaucoma (ACG) is a more aggressive disease than open-angle glaucoma.
Anterior Segment Optical Coherence Tomography (AS-OCT) imaging provides a fast and contactless way to discriminate angle closure from open angle.
There is no public AS-OCT dataset available for evaluating the existing methods in a uniform way.
We organized the Angle closure Glaucoma Evaluation challenge (AGE), held in conjunction with MICCAI 2019.
arXiv Detail & Related papers (2020-05-05T14:55:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.