Automated Artifact Detection in Ultra-widefield Fundus Photography of
Patients with Sickle Cell Disease
- URL: http://arxiv.org/abs/2307.05780v1
- Date: Tue, 11 Jul 2023 20:17:47 GMT
- Title: Automated Artifact Detection in Ultra-widefield Fundus Photography of
Patients with Sickle Cell Disease
- Authors: Anqi Feng, Dimitri Johnson, Grace R. Reilly, Loka Thangamathesvaran,
Ann Nampomba, Mathias Unberath, Adrienne W. Scott, Craig Jones
- Abstract summary: The aim of this study was to create an automated algorithm for UWF-FP artifact classification.
The accuracy for each class was Eyelash Present at 83.7%, Lower Eyelid Obstructing at 83.7%, Upper Eyelid Obstructing at 98.0%, Image Too Dark at 77.6%, Dark Artifact at 93.9%, and Image Not Centered at 91.8%.
- Score: 4.4400645551116815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Importance: Ultra-widefield fundus photography (UWF-FP) has shown utility in
sickle cell retinopathy screening; however, image artifacts may diminish the
quality and gradeability of images. Objective: To create an automated algorithm for
UWF-FP artifact classification. Design: A neural network based automated
artifact detection algorithm was designed to identify commonly encountered
UWF-FP artifacts in a cross-section of patient UWF-FPs. A pre-trained ResNet-50
neural network was trained on a subset of the images and the classification
accuracy, sensitivity, and specificity were quantified on the held-out test
set. Setting: The study is based on patients from a tertiary care hospital
site. Participants: 243 UWF-FPs were acquired from patients with sickle
cell disease (SCD), and artifact labelling was performed in the following
categories: Eyelash Present, Lower Eyelid Obstructing, Upper Eyelid Obstructing,
Image Too Dark, Dark Artifact, and Image Not Centered. Results: Overall, the
accuracy for each class was Eyelash Present at 83.7%, Lower Eyelid Obstructing
at 83.7%, Upper Eyelid Obstructing at 98.0%, Image Too Dark at 77.6%, Dark
Artifact at 93.9%, and Image Not Centered at 91.8%. Conclusions and Relevance:
This automated algorithm shows promise in identifying common imaging artifacts
on a subset of Optos UWF-FP in SCD patients. Further refinement is ongoing with
the goal of improving efficiency of tele-retinal screening in sickle cell
retinopathy (SCR) by providing the photographer real-time feedback on the
types of artifacts present and the need for image re-acquisition. This
algorithm may also have future applicability in other retinal diseases by
improving the quality and efficiency of UWF-FP image acquisition.
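The study above reports per-class accuracy for six binary artifact labels evaluated on a held-out test set. As a rough illustration (not the study's code; the class names are taken from the abstract, while the function and data are hypothetical), per-class accuracy, sensitivity, and specificity for such a multi-label classifier can be computed as:

```python
# Illustrative sketch: per-class metrics for a multi-label artifact
# classifier. Class names follow the abstract; everything else is assumed.

CLASSES = ["Eyelash Present", "Lower Eyelid Obstructing",
           "Upper Eyelid Obstructing", "Image Too Dark",
           "Dark Artifact", "Image Not Centered"]

def per_class_metrics(y_true, y_pred):
    """y_true, y_pred: lists of per-image binary label vectors, one 0/1
    entry per artifact class. Returns a dict mapping each class name to
    its accuracy, sensitivity, and specificity."""
    metrics = {}
    for i, name in enumerate(CLASSES):
        tp = fp = tn = fn = 0
        for t, p in zip(y_true, y_pred):
            if t[i] and p[i]:
                tp += 1            # artifact present, correctly flagged
            elif not t[i] and p[i]:
                fp += 1            # artifact absent, wrongly flagged
            elif not t[i] and not p[i]:
                tn += 1            # artifact absent, correctly passed
            else:
                fn += 1            # artifact present, missed
        total = tp + fp + tn + fn
        metrics[name] = {
            "accuracy": (tp + tn) / total if total else 0.0,
            "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
            "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
        }
    return metrics
```

Sensitivity and specificity are reported alongside accuracy because artifact classes can be imbalanced: a rare artifact class may show high accuracy even when the classifier misses most positives.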
Related papers
- UWF-RI2FA: Generating Multi-frame Ultrawide-field Fluorescein Angiography from Ultrawide-field Retinal Imaging Improves Diabetic Retinopathy Stratification [10.833651195216557]
We aim to acquire dye-free UWF-FA images from noninvasive UWF retinal imaging (UWF-RI) using generative artificial intelligence (GenAI)
A total of 18,321 UWF-FA images of different phases were registered with corresponding UWF-RI images and fed into a generative adversarial network (GAN)-based model for training.
The quality of generated UWF-FA images was evaluated through quantitative metrics and human evaluation.
arXiv Detail & Related papers (2024-08-20T08:22:29Z) - Explainable Convolutional Neural Networks for Retinal Fundus Classification and Cutting-Edge Segmentation Models for Retinal Blood Vessels from Fundus Images [0.0]
This research focuses on the critical task of early disease diagnosis by examining retinal blood vessels in fundus images.
Our research in fundus image analysis advances deep learning-based classification using eight pre-trained CNN models.
To enhance interpretability, we utilize Explainable AI techniques such as Grad-CAM, Grad-CAM++, Score-CAM, Faster Score-CAM, and Layer CAM.
arXiv Detail & Related papers (2024-05-12T17:21:57Z) - UWAFA-GAN: Ultra-Wide-Angle Fluorescein Angiography Transformation via Multi-scale Generation and Registration Enhancement [17.28459176559761]
UWF fluorescein angiography (UWF-FA) requires the administration of a fluorescent dye via injection into the patient's hand or elbow.
To mitigate potential adverse effects associated with injections, researchers have proposed the development of cross-modality medical image generation algorithms.
We introduce a novel conditional generative adversarial network (UWAFA-GAN) to synthesize UWF-FA from UWF-SLO.
arXiv Detail & Related papers (2024-05-01T14:27:43Z) - Uncertainty-inspired Open Set Learning for Retinal Anomaly
Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, which was trained with fundus images of 9 retinal conditions.
Our UIOS model with a thresholding strategy achieved F1 scores of 99.55%, 97.01%, and 91.91% for the internal testing set.
UIOS correctly predicted high uncertainty scores, which would prompt a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z) - Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - Performance of a deep learning system for detection of referable
diabetic retinopathy in real clinical settings [0.0]
RetCAD v.1.3.1 was developed to automatically detect referable diabetic retinopathy (DR)
The reduction in workload achievable by incorporating this artificial intelligence-based technology was analysed.
arXiv Detail & Related papers (2022-05-11T14:59:10Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
Performance was measured using the area under the receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score.
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - Cervical Optical Coherence Tomography Image Classification Based on
Contrastive Self-Supervised Texture Learning [2.674926127069043]
This study aims to develop a computer-aided diagnosis (CADx) approach to classifying in-vivo cervical OCT images based on self-supervised learning.
Besides high-level semantic features extracted by a convolutional neural network (CNN), the proposed CADx approach leverages unlabeled cervical OCT images' texture features learned by contrastive texture learning.
arXiv Detail & Related papers (2021-08-11T07:52:59Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
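The SSIM-regression entry in the list above builds on the structural similarity index. As a minimal sketch of the index itself (a single global window over flat grayscale pixel lists; practical SSIM implementations, including the one that paper regresses against, average local sliding windows), it can be computed as:

```python
def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over two equal-length grayscale pixel lists.
    Combines luminance (means), contrast (variances), and structure
    (covariance) terms with the standard stabilizing constants."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (k1 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizer for contrast/structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical inputs give an SSIM of 1.0; heavily corrupted inputs drive it toward 0, which is what makes the index usable as a regression target for artifact severity.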
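Several papers above report AUC and F1 alongside sensitivity and specificity. As an illustrative sketch (hypothetical helper functions, not any listed paper's code), AUC can be computed as the Mann-Whitney rank probability and F1 as the harmonic mean of precision and recall:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count
    half) -- the Mann-Whitney formulation of the ROC area."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def f1(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Unlike accuracy, AUC is threshold-free, and F1 ignores true negatives, so the two complement each other on imbalanced screening datasets like those described above.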
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.