Detection of multiple retinal diseases in ultra-widefield fundus images
using deep learning: data-driven identification of relevant regions
- URL: http://arxiv.org/abs/2203.06113v1
- Date: Fri, 11 Mar 2022 17:33:33 GMT
- Authors: Justin Engelmann, Alice D. McTrusty, Ian J. C. MacCormick, Emma Pead,
Amos Storkey, Miguel O. Bernabeu
- Abstract summary: Ultra-widefield (UWF) imaging is a promising modality that captures a larger retinal field of view.
Previous studies showed that deep learning (DL) models are effective for detecting retinal disease in UWF images.
We propose a DL model that can recognise multiple retinal diseases under more realistic conditions.
- Score: 2.20200533591633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultra-widefield (UWF) imaging is a promising modality that captures a larger
retinal field of view compared to traditional fundus photography. Previous
studies showed that deep learning (DL) models are effective for detecting
retinal disease in UWF images, but primarily considered individual diseases
under less-than-realistic conditions (excluding images with other diseases,
artefacts, comorbidities, or borderline cases; and balancing healthy and
diseased images) and did not systematically investigate which regions of the
UWF images are relevant for disease detection. We first improve on the state of
the field by proposing a DL model that can recognise multiple retinal diseases
under more realistic conditions. We then use global explainability methods to
identify which regions of the UWF images the model generally attends to. Our
model performs very well, separating healthy from diseased retinas with an
area under the curve (AUC) of 0.9206 on an internal test set, and an AUC of
0.9841 on a challenging external test set. When diagnosing specific diseases,
the model attends to regions where we would expect those diseases to occur. We
further identify the posterior pole as the most important region in a purely
data-driven fashion. Surprisingly, 10% of the image around the posterior pole
is sufficient for achieving comparable performance to having the full images
available.
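As a rough illustration of that last finding, the snippet below crops a centred square covering about 10% of the image area. This is a minimal sketch that assumes the posterior pole lies near the image centre; the paper identifies the relevant region in a data-driven way, and the function name here is ours.

```python
import numpy as np

def crop_central_region(image: np.ndarray, area_fraction: float = 0.10) -> np.ndarray:
    """Crop a centred square covering `area_fraction` of the image area.

    Illustrative stand-in for restricting a classifier's input to the
    region around the posterior pole (assumed central here).
    """
    h, w = image.shape[:2]
    # A square of side s has area s^2; solve s^2 = area_fraction * h * w.
    side = int(np.sqrt(area_fraction * h * w))
    top = (h - side) // 2
    left = (w - side) // 2
    return image[top:top + side, left:left + side]

img = np.zeros((1000, 1500, 3), dtype=np.uint8)  # dummy UWF-sized image
crop = crop_central_region(img, 0.10)
print(crop.shape)  # a ~10%-area centred square
```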
Related papers
- EyeDiff: text-to-image diffusion model improves rare eye disease diagnosis [7.884451100342276]
EyeDiff is a text-to-image model designed to generate multimodal ophthalmic images from natural language prompts.
EyeDiff is trained on eight large-scale datasets and is adapted to ten multi-country external datasets.
arXiv Detail & Related papers (2024-11-15T07:30:53Z)
- A Disease-Specific Foundation Model Using Over 100K Fundus Images: Release and Validation for Abnormality and Multi-Disease Classification on Downstream Tasks [0.0]
We developed a Fundus-Specific Pretrained Model (Image+Fundus), a supervised artificial intelligence model trained to detect abnormalities in fundus images.
A total of 57,803 images were used to develop this pretrained model, which achieved superior performance across various downstream tasks.
arXiv Detail & Related papers (2024-08-16T15:03:06Z)
- Common and Rare Fundus Diseases Identification Using Vision-Language Foundation Model with Knowledge of Over 400 Diseases [57.27458882764811]
Previous foundation models for retinal images were pre-trained with limited disease categories and knowledge base.
For RetiZero's pre-training, we compiled 341,896 fundus images paired with text descriptions, sourced from public datasets, ophthalmic literature, and online resources.
RetiZero exhibits superior performance in several downstream tasks, including zero-shot disease recognition, image-to-image retrieval, and internal- and cross-domain disease identification.
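The image-to-image retrieval task mentioned above is typically done by ranking gallery embeddings against a query embedding. The sketch below shows the generic cosine-similarity version; the embedding model itself (e.g. RetiZero's encoder) is assumed, not implemented.

```python
import numpy as np

def retrieve_top_k(query: np.ndarray, gallery: np.ndarray, k: int = 3):
    """Rank gallery embeddings by cosine similarity to a query embedding.

    Generic sketch of embedding-based image-to-image retrieval; the
    embeddings would come from a pre-trained image encoder.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery item
    order = np.argsort(-sims)[:k]     # indices of the k most similar items
    return order, sims[order]

query = np.array([1.0, 0.0, 0.0])
gallery = np.array([[1.0, 0.0, 0.0],   # exact match
                    [0.0, 1.0, 0.0],   # orthogonal
                    [0.6, 0.8, 0.0]])  # partial match
order, top_sims = retrieve_top_k(query, gallery, k=2)
print(order, top_sims)  # exact match ranks first
```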
arXiv Detail & Related papers (2024-06-13T16:53:57Z)
- Diagnosis of Multiple Fundus Disorders Amidst a Scarcity of Medical Experts Via Self-supervised Machine Learning [13.174267261284733]
Fundus diseases are major causes of visual impairment and blindness worldwide.
We propose a general self-supervised machine learning framework that can handle diverse fundus diseases from unlabeled fundus images.
arXiv Detail & Related papers (2024-04-20T14:15:25Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
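The residual adapters described above follow a common bottleneck pattern: project down, apply a nonlinearity, project up, and add a skip connection. The snippet below is an illustrative sketch only; the bottleneck shapes and the idea of inserting one adapter per encoder level are assumptions about the design.

```python
import numpy as np

def residual_adapter(x: np.ndarray, w_down: np.ndarray, w_up: np.ndarray) -> np.ndarray:
    """Lightweight residual adapter: x + W_up . relu(W_down . x).

    Sketch of adapting a frozen pre-trained encoder: only the small
    adapter weights would be trained, the backbone stays fixed.
    """
    hidden = np.maximum(w_down @ x, 0.0)  # project down + ReLU
    return x + w_up @ hidden              # project up + skip connection

rng = np.random.default_rng(2)
x = rng.normal(size=(16,))                 # a frozen encoder feature
w_down = rng.normal(size=(4, 16)) * 0.1    # bottleneck keeps the adapter light
w_up = rng.normal(size=(16, 4)) * 0.1
y = residual_adapter(x, w_down, w_up)
print(y.shape)  # same shape as the input feature
```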
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose the Contrastive Attention (CA) model, which compares the current input image with normal images to distill the contrastive information.
We achieve state-of-the-art results on two public datasets.
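One way to read "comparing the input with normal images" is to subtract a similarity-weighted summary of normal-image features from the input features, leaving a residue that highlights abnormality. The sketch below is our loose interpretation; the similarity-based weighting over a pool of normal features is an assumption about the mechanism, not the paper's exact formulation.

```python
import numpy as np

def contrastive_attention(x_feat: np.ndarray, normal_feats: np.ndarray) -> np.ndarray:
    """Subtract an attention-weighted summary of normal features.

    x_feat: (D,) features of the current image.
    normal_feats: (M, D) features of a pool of normal images.
    Returns a (D,) contrastive residue.
    """
    sims = normal_feats @ x_feat              # similarity to each normal image
    w = np.exp(sims - sims.max())             # softmax attention weights
    w /= w.sum()
    normal_summary = w @ normal_feats         # weighted normal prototype
    return x_feat - normal_summary            # what the input has that normals lack

rng = np.random.default_rng(1)
x = rng.normal(size=(16,))
normals = rng.normal(size=(5, 16))
residue = contrastive_attention(x, normals)
print(residue.shape)  # (16,)
```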
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
- An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
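Combining patch-level information through an attention mechanism, as described above, usually means computing a scalar attention score per patch and taking a weighted sum of patch features. The sketch below shows this attention-based multiple-instance pooling in generic form; the parameter shapes and two-layer scoring function are assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patch_features: np.ndarray, w: np.ndarray, v: np.ndarray):
    """Attention-based MIL pooling over patch features (N, D).

    score_i = v . tanh(W f_i); weights = softmax(scores);
    bag = sum_i weights_i * f_i. The weights double as an
    interpretability map over patches.
    """
    hidden = np.tanh(patch_features @ w.T)   # (N, H)
    scores = hidden @ v                      # (N,) one score per patch
    weights = softmax(scores)                # sums to 1 over patches
    return weights @ patch_features, weights  # (D,) bag feature, (N,) weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))   # 8 patches, 16-dim features each
W = rng.normal(size=(4, 16))
v = rng.normal(size=(4,))
bag, attn = attention_mil_pool(feats, W, v)
print(bag.shape)  # (16,) bag-level feature for classification
```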
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
- Weakly Supervised Thoracic Disease Localization via Disease Masks [29.065791290544983]
Weakly supervised localization methods have been proposed that use only image-level annotations.
We propose a spatial attention method using disease masks that describe the areas where diseases mainly occur.
We show that the proposed method results in superior localization performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-01-25T06:52:57Z)
- Leveraging Regular Fundus Images for Training UWF Fundus Diagnosis Models via Adversarial Learning and Pseudo-Labeling [29.009663623719064]
Ultra-widefield (UWF) 200-degree fundus imaging by Optos cameras has gradually been introduced.
Regular fundus images contain a large amount of high-quality and well-annotated data.
Due to the domain gap, models trained by regular fundus images to recognize UWF fundus images perform poorly.
We propose the use of a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus.
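The pseudo-labeling side of this approach typically means keeping only target-domain predictions the source-trained model is confident about, then retraining on them. The sketch below shows that selection step in generic form; the confidence threshold and the idea of applying it to (translated) UWF images are assumptions about the pipeline, and the CycleGAN translation itself is out of scope here.

```python
import numpy as np

def select_pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """Keep predictions whose max class probability exceeds `threshold`.

    probs: (N, C) per-class probabilities from a model trained on
    regular fundus images, applied to target-domain UWF images.
    Returns (kept indices, pseudo-labels) for retraining.
    """
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.05],   # confident: kept as class 0
                  [0.60, 0.40],   # uncertain: dropped
                  [0.10, 0.90]])  # confident: kept as class 1
idx, labels = select_pseudo_labels(probs, threshold=0.9)
print(idx, labels)  # [0 2] [0 1]
```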
arXiv Detail & Related papers (2020-11-27T16:25:30Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.