FundaQ-8: A Clinically-Inspired Scoring Framework for Automated Fundus Image Quality Assessment
- URL: http://arxiv.org/abs/2506.20303v1
- Date: Wed, 25 Jun 2025 10:28:53 GMT
- Title: FundaQ-8: A Clinically-Inspired Scoring Framework for Automated Fundus Image Quality Assessment
- Authors: Lee Qi Zun, Oscar Wong Jin Hao, Nor Anita Binti Che Omar, Zalifa Zakiah Binti Asnir, Mohamad Sabri bin Sinal Zainal, Goh Man Fye
- Abstract summary: FundaQ-8 is an expert-validated framework for systematically assessing fundus image quality. We develop a ResNet18-based regression model to predict continuous quality scores in the 0 to 1 range. Validation against the EyeQ dataset and statistical analyses confirm the framework's reliability and clinical interpretability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated fundus image quality assessment (FIQA) remains a challenge due to variations in image acquisition and subjective expert evaluations. We introduce FundaQ-8, a novel expert-validated framework for systematically assessing fundus image quality using eight critical parameters, including field coverage, anatomical visibility, illumination, and image artifacts. Using FundaQ-8 as a structured scoring reference, we develop a ResNet18-based regression model to predict continuous quality scores in the 0 to 1 range. The model is trained on 1800 fundus images from real-world clinical sources and Kaggle datasets, using transfer learning, mean squared error optimization, and standardized preprocessing. Validation against the EyeQ dataset and statistical analyses confirm the framework's reliability and clinical interpretability. Incorporating FundaQ-8 into deep learning models for diabetic retinopathy grading also improves diagnostic robustness, highlighting the value of quality-aware training in real-world screening applications.
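The abstract does not reproduce the eight-parameter rubric, but a minimal sketch of how eight per-criterion ratings might be aggregated into a continuous 0-to-1 score is shown below. The last four parameter names and the equal weighting are assumptions for illustration, not the paper's actual scoring scheme.

```python
# Hypothetical sketch of a FundaQ-8-style aggregate score.
# Only four of the eight criteria are named in the abstract; the
# remaining parameter names and the equal weighting are assumed.

FUNDAQ8_PARAMS = [
    "field_coverage", "anatomical_visibility", "illumination", "artifacts",
    "focus", "contrast", "color_fidelity", "overall_clarity",  # last four assumed
]

def fundaq8_score(ratings: dict) -> float:
    """Average eight per-criterion ratings (each in [0, 1]) into one score."""
    missing = [p for p in FUNDAQ8_PARAMS if p not in ratings]
    if missing:
        raise ValueError(f"missing criteria: {missing}")
    vals = [ratings[p] for p in FUNDAQ8_PARAMS]
    if any(not 0.0 <= v <= 1.0 for v in vals):
        raise ValueError("each rating must lie in [0, 1]")
    return sum(vals) / len(vals)

example = {p: 0.75 for p in FUNDAQ8_PARAMS}
print(fundaq8_score(example))  # 0.75
```

A regression model such as the paper's ResNet18 would then be trained with mean squared error against scores of this form.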
Related papers
- Metrics that matter: Evaluating image quality metrics for medical image generation [48.85783422900129]
This study comprehensively assesses commonly used no-reference image quality metrics using brain MRI data. We evaluate metric sensitivity to a range of challenges, including noise, distribution shifts, and, critically, morphological alterations designed to mimic clinically relevant inaccuracies.
arXiv Detail & Related papers (2025-05-12T01:57:25Z) - A No-Reference Medical Image Quality Assessment Method Based on Automated Distortion Recognition Technology: Application to Preprocessing in MRI-guided Radiotherapy [9.332679162161428]
We analyzed 106,000 MR images from 10 patients with liver metastasis. Our No-Reference Quality Assessment Model includes: 1) image preprocessing to enhance visibility of key diagnostic features. The tumor tracking algorithm confirmed significant tracking accuracy improvements with preprocessed images.
arXiv Detail & Related papers (2024-12-09T15:48:16Z) - FGR-Net: Interpretable fundus image gradeability classification based on deep reconstruction learning [4.377496499420086]
This paper presents a novel framework called FGR-Net to automatically assess and interpret underlying fundus image quality.
The FGR-Net model also provides an interpretable quality assessment through visualizations.
The experimental results showed the superiority of FGR-Net over the state-of-the-art quality assessment methods, with an accuracy of 89% and an F1-score of 87%.
arXiv Detail & Related papers (2024-09-16T12:56:23Z) - DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [73.6767681305851]
Blind image quality assessment (IQA) in the wild presents significant challenges. Given the difficulty of collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem. Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method based on diffusion priors.
arXiv Detail & Related papers (2024-05-30T12:32:35Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - Feature Denoising Diffusion Model for Blind Image Quality Assessment [58.5808754919597]
Blind Image Quality Assessment (BIQA) aims to evaluate image quality in line with human perception, without reference benchmarks.
Deep learning BIQA methods typically depend on using features from high-level tasks for transfer learning.
In this paper, we take an initial step towards exploring the diffusion model for feature denoising in BIQA.
arXiv Detail & Related papers (2024-01-22T13:38:24Z) - Self-supervised Domain Adaptation for Breaking the Limits of Low-quality Fundus Image Quality Enhancement [14.677912534121273]
Low-quality fundus images and style inconsistency potentially increase uncertainty in the diagnosis of fundus disease.
We formulate two self-supervised domain adaptation tasks to disentangle the features of image content, low-quality factor and style information.
Our DASQE method achieves new state-of-the-art performance when only low-quality images are available.
arXiv Detail & Related papers (2023-01-17T15:07:20Z) - Automated Assessment of Transthoracic Echocardiogram Image Quality Using Deep Neural Networks [2.5922360296344396]
The quality of acquired images is highly dependent on operator skill and is assessed subjectively.
This study is aimed at providing an objective assessment pipeline for echocardiogram image quality by defining a new set of domain-specific quality indicators.
arXiv Detail & Related papers (2022-09-02T12:15:14Z) - FundusQ-Net: a Regression Quality Assessment Deep Learning Algorithm for Fundus Images Quality Grading [0.0]
Glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment.
A key step in this process is to automatically estimate the quality of the fundus images to ensure they are interpretable by a human operator or a machine learning model.
We present a novel fundus image quality scale and deep learning (DL) model that can estimate fundus image quality relative to this new scale.
arXiv Detail & Related papers (2022-05-02T21:01:34Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
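The contrastive pairwise objective behind CONTRIQUE is not spelled out in this summary; a minimal NT-Xent-style sketch for one positive pair scored against a batch of negatives, assuming cosine similarity with a temperature parameter, could look like this (a simplified illustration, not the paper's exact formulation):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(z_anchor, z_positive, negatives, tau=0.1):
    """NT-Xent-style loss: pull the positive pair together, push negatives away."""
    pos = np.exp(cosine(z_anchor, z_positive) / tau)
    neg = sum(np.exp(cosine(z_anchor, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))
```

A loss near zero means the anchor embedding already sits much closer to its positive than to any negative, which is the representation quality such pretraining aims for.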
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
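The fidelity loss mentioned above has a standard closed form for comparing a ground-truth and a predicted preference probability over an image pair; a minimal sketch is below (the probabilistic pairing model that produces `p_hat` is omitted, and the small epsilon for numerical stability is an implementation choice, not from the paper):

```python
import math

def fidelity_loss(p, p_hat, eps=1e-8):
    """Fidelity loss between ground-truth preference probability p and
    predicted preference probability p_hat for one image pair.
    Zero (up to eps) when the two probabilities agree exactly."""
    return 1.0 - math.sqrt(p * p_hat + eps) - math.sqrt((1.0 - p) * (1.0 - p_hat) + eps)
```

The loss shrinks as the predicted preference approaches the ground truth, which is what lets the model learn from relative quality rankings rather than absolute scores.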
arXiv Detail & Related papers (2020-05-28T13:35:23Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.