Deep Learning Ensemble for Predicting Diabetic Macular Edema Onset Using Ultra-Wide Field Color Fundus Image
- URL: http://arxiv.org/abs/2410.06483v2
- Date: Mon, 09 Dec 2024 21:38:56 GMT
- Title: Deep Learning Ensemble for Predicting Diabetic Macular Edema Onset Using Ultra-Wide Field Color Fundus Image
- Authors: Pengyao Qin, Arun J. Thirunavukarasu, Theodoros Arvanitis, Le Zhang
- Abstract summary: Diabetic macular edema (DME) is a severe complication of diabetes.
We propose an ensemble method to predict ci-DME onset within a year.
- Score: 2.9945018168793025
- Abstract: Diabetic macular edema (DME) is a severe complication of diabetes, characterized by thickening of the central portion of the retina due to accumulation of fluid. DME is a significant and common cause of visual impairment in diabetic patients. Center-involved DME (ci-DME) is the highest-risk form of the disease because fluid extends close to the fovea, which is responsible for sharp central vision. Earlier diagnosis or prediction of ci-DME may improve treatment outcomes. Here, we propose an ensemble method to predict ci-DME onset within a year, developed using synthetic ultra-wide field color fundus photography (UWF-CFP) images provided by the DIAMOND Challenge. We adopted a variety of state-of-the-art baseline classification networks, including ResNet, DenseNet, EfficientNet, and VGG, with the aim of enhancing model robustness. The best-performing models were DenseNet-121, ResNet-152, and EfficientNet-b7, and these were assembled into a definitive predictive model. The final ensemble model demonstrates strong performance, with an Area Under the Curve (AUC) of 0.7017, an F1 score of 0.6512, and an Expected Calibration Error (ECE) of 0.2057 when deployed on the synthetic test dataset. Results from our ensemble model were superior or comparable to previously recorded results obtained in highly curated settings using conventional or ultra-wide field fundus photography. Optimal sensitivity in previous studies (using human or computer graders) ranges from 67.3% to 98%, and specificity from 47.8% to 80%. Therefore, our method can be used safely and effectively in a range of settings and may facilitate earlier diagnosis, better treatment decisions, and improved prognostication in ci-DME.
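The core mechanics the abstract describes, soft-voting over per-model probabilities and measuring calibration with ECE, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width 10-bin ECE scheme and the simple probability averaging are assumptions, and the function names are hypothetical.

```python
import numpy as np

def ensemble_probs(prob_list):
    """Soft voting: average class-1 probabilities from several models."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average of |accuracy - mean confidence| over the bins."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    preds = (probs >= 0.5).astype(int)
    # Confidence is the probability assigned to the predicted class.
    conf = np.where(preds == 1, probs, 1.0 - probs)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (preds[mask] == labels[mask]).mean()
            avg_conf = conf[mask].mean()
            ece += mask.mean() * abs(acc - avg_conf)
    return ece
```

A lower ECE means predicted probabilities track observed event frequencies more closely, which matters for a screening tool whose outputs may inform referral decisions.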
Related papers
- Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases? [15.146396276161937]
RETFound and DINOv2 models were evaluated for ocular disease detection and systemic disease prediction tasks.
RETFound achieved superior performance over all DINOv2 models in predicting heart failure, infarction, and ischaemic stroke.
arXiv Detail & Related papers (2025-02-10T09:31:39Z)
- Controllable retinal image synthesis using conditional StyleGAN and latent space manipulation for improved diagnosis and grading of diabetic retinopathy [0.0]
This paper proposes a framework for controllably generating high-fidelity and diverse DR fundus images.
We achieve comprehensive control over DR severity and visual features within generated images.
We manipulate the DR images generated conditionally on grades, further enhancing the dataset diversity.
arXiv Detail & Related papers (2024-09-11T17:08:28Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - Multi-scale Spatio-temporal Transformer-based Imbalanced Longitudinal
Learning for Glaucoma Forecasting from Irregular Time Series Images [45.894671834869975]
Glaucoma is one of the major eye diseases that leads to progressive optic nerve fiber damage and irreversible blindness.
We introduce the Multi-scale Spatio-temporal Transformer Network (MST-former) based on the transformer architecture tailored for sequential image inputs.
Our method shows excellent generalization capability on the Alzheimer's Disease Neuroimaging Initiative (ADNI) MRI dataset, with an accuracy of 90.3% for mild cognitive impairment and Alzheimer's disease prediction.
arXiv Detail & Related papers (2024-02-21T02:16:59Z) - AMDNet23: A combined deep Contour-based Convolutional Neural Network and
Long Short Term Memory system to diagnose Age-related Macular Degeneration [0.0]
This study presents AMDNet23, a deep learning system combining convolutional neural networks (CNN) and long short-term memory (LSTM) to automatically detect age-related macular degeneration (AMD) from fundus photographs.
The proposed hybrid deep AMDNet23 model detects AMD ocular disease, achieving an accuracy of 96.50%, specificity of 99.32%, sensitivity of 96.5%, and F1-score of 96.49%.
arXiv Detail & Related papers (2023-08-30T07:48:32Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - An Ensemble Method to Automatically Grade Diabetic Retinopathy with
Optical Coherence Tomography Angiography Images [4.640835690336653]
We propose an ensemble method to automatically grade Diabetic retinopathy (DR) images available from Diabetic Retinopathy Analysis Challenge (DRAC) 2022.
First, we adopt state-of-the-art classification networks and train them to grade UW-OCTA images using different splits of the available dataset.
Ultimately, we obtain 25 models, of which the top 16 are selected and ensembled to generate the final predictions.
arXiv Detail & Related papers (2022-12-12T22:06:47Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification using the largest and richest femur fracture dataset to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies
on Medical Image Classification [63.44396343014749]
We propose a new margin-based surrogate loss function for the AUC score.
It is more robust than the commonly used square loss while enjoying the same advantage in terms of large-scale optimization.
To the best of our knowledge, this is the first work that makes DAM succeed on large-scale medical image datasets.
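The margin-based idea behind AUC surrogate losses can be sketched as a pairwise penalty on positive-negative score pairs whose gap falls below a margin. This is an illustrative simplification under assumed conventions, not the paper's exact formulation (the referenced work uses a min-max reformulation for scalable deep AUC maximization); the function name and the squared-hinge choice are assumptions.

```python
import numpy as np

def pairwise_auc_margin_loss(scores, labels, margin=1.0):
    """Squared-hinge pairwise surrogate for AUC: penalize each
    positive-negative pair whose score gap is below the margin."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # All positive-negative score differences via broadcasting.
    diffs = pos[:, None] - neg[None, :]
    # Zero loss once a pair's gap reaches the margin.
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)
```

Because the loss only cares about the ranking of positives above negatives, it optimizes the quantity AUC actually measures, unlike accuracy-oriented losses that can be dominated by the majority class on imbalanced medical datasets.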
arXiv Detail & Related papers (2020-12-06T03:41:51Z) - Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy
Severity Prediction [0.0]
Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world.
To derive optimal representation of retinal images, features extracted from multiple pre-trained ConvNet models are blended.
We achieve an accuracy of 97.41%, and a kappa statistic of 94.82 for DR identification and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction.
arXiv Detail & Related papers (2020-05-30T06:46:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.