Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for
Ophthalmic Images
- URL: http://arxiv.org/abs/2209.00773v1
- Date: Fri, 2 Sep 2022 01:25:45 GMT
- Title: Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for
Ophthalmic Images
- Authors: Min Shi, Anagha Lokhande, Mojtaba S. Fazli, Vishal Sharma, Yu Tian,
Yan Luo, Louis R. Pasquale, Tobias Elze, Michael V. Boland, Nazlee Zebardast,
David S. Friedman, Lucy Q. Shen, Mengyu Wang
- Abstract summary: We propose an artifact-tolerant unsupervised learning framework termed EyeLearn for learning representations of ophthalmic images.
EyeLearn has an artifact correction module to learn representations that can best predict artifact-free ophthalmic images.
To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection using a real-world ophthalmic image dataset of glaucoma patients.
- Score: 18.186766129476077
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ophthalmic images and derivatives such as the retinal nerve fiber layer
(RNFL) thickness map are crucial for detecting and monitoring ophthalmic
diseases (e.g., glaucoma). For computer-aided diagnosis of eye diseases, the
key technique is to automatically extract meaningful features from ophthalmic
images that can reveal the biomarkers (e.g., RNFL thinning patterns) linked to
functional vision loss. However, representation learning from ophthalmic images
that links structural retinal damage with human vision loss is non-trivial
mostly due to large anatomical variations between patients. The task becomes
even more challenging in the presence of image artifacts, which are common due
to issues with image acquisition and automated segmentation. In this paper, we
propose an artifact-tolerant unsupervised learning framework termed EyeLearn
for learning representations of ophthalmic images. EyeLearn has an artifact
correction module to learn representations that can best predict artifact-free
ophthalmic images. In addition, EyeLearn adopts a clustering-guided contrastive
learning strategy to explicitly capture the intra- and inter-image affinities.
During training, images are dynamically organized in clusters to form
contrastive samples in which images in the same or different clusters are
encouraged to learn similar or dissimilar representations, respectively. To
evaluate EyeLearn, we use the learned representations for visual field
prediction and glaucoma detection using a real-world ophthalmic image dataset
of glaucoma patients. Extensive experiments and comparisons with
state-of-the-art methods verified the effectiveness of EyeLearn for learning
optimal feature representations from ophthalmic images.
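The clustering-guided contrastive strategy described above (dynamically cluster images, then pull same-cluster representations together and push different-cluster ones apart) can be sketched roughly as follows. This is a minimal illustration, not the authors' EyeLearn implementation: the nearest-centroid assignment and the supervised-contrastive-style loss with cluster IDs as pseudo-labels are assumptions about how such a strategy is typically realized.

```python
# Illustrative sketch of clustering-guided contrastive learning.
# Cluster assignments serve as pseudo-labels: images in the same cluster
# are treated as positives, images in different clusters as negatives.
import numpy as np

def assign_clusters(embeddings, centroids):
    """Assign each embedding to its nearest centroid (one k-means-style step)."""
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def cluster_contrastive_loss(embeddings, cluster_ids, temperature=0.1):
    """Supervised-contrastive-style loss over cluster pseudo-labels:
    same-cluster pairs are encouraged to have similar (high cosine)
    representations, different-cluster pairs dissimilar ones."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                   # pairwise cosine similarities
    n = len(z)
    mask = ~np.eye(n, dtype=bool)                 # exclude self-pairs
    exp_sim = np.exp(sim) * mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    losses = []
    for i in range(n):
        pos = (cluster_ids == cluster_ids[i]) & mask[i]   # same-cluster positives
        if pos.any():
            losses.append(-log_prob[i, pos].mean())
    return float(np.mean(losses))
```

In a full training loop, the clusters would be re-estimated periodically from the current embeddings, so the contrastive samples are reorganized dynamically as training progresses.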
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Disentangling representations of retinal images with generative models [12.547633373232026]
We introduce a novel population model for retinal fundus images that disentangles patient attributes from camera effects.
Our results show that our model provides a new perspective on the complex relationship between patient attributes and technical confounders in retinal fundus image generation.
arXiv Detail & Related papers (2024-02-29T14:11:08Z)
- Generating Realistic Counterfactuals for Retinal Fundus and OCT Images using Diffusion Models [36.81751569090276]
Counterfactual reasoning is often used in clinical settings to explain decisions or weigh alternatives.
Here, we demonstrate that using a diffusion model in combination with an adversarially robust classifier trained on retinal disease classification tasks enables the generation of highly realistic counterfactuals.
In a user study, domain experts found the counterfactuals generated using our method significantly more realistic than counterfactuals generated from a previous method, and even indistinguishable from real images.
arXiv Detail & Related papers (2023-11-20T09:28:04Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, a process that influences several cognitive functions in humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
We conduct a search for good representations in pathology by training a variety of self-supervised models with validation on a variety of weakly-supervised and patch-level tasks.
Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images.
arXiv Detail & Related papers (2022-03-01T16:14:41Z)
- MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
We present deep learning-based eye segmentation and multitask classification networks.
arXiv Detail & Related papers (2021-10-06T08:10:28Z)
- A Semi-Supervised Classification Method of Apicomplexan Parasites and Host Cell Using Contrastive Learning Strategy [6.677163460963862]
This paper proposes a semi-supervised classification method for three kinds of apicomplexan parasites and non-infected host cells microscopic images.
It uses a small number of labeled data and a large number of unlabeled data for training.
In the case where only 1% of microscopic images are labeled, the proposed method reaches an accuracy of 94.90% in a generalized testing set.
arXiv Detail & Related papers (2021-04-14T02:34:50Z)
- An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
- Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.