UNO-QA: An Unsupervised Anomaly-Aware Framework with Test-Time
Clustering for OCTA Image Quality Assessment
- URL: http://arxiv.org/abs/2212.10541v1
- Date: Tue, 20 Dec 2022 18:48:04 GMT
- Title: UNO-QA: An Unsupervised Anomaly-Aware Framework with Test-Time
Clustering for OCTA Image Quality Assessment
- Authors: Juntao Chen, Li Lin, Pujin Cheng, Yijin Huang, Xiaoying Tang
- Abstract summary: We propose an unsupervised anomaly-aware framework with test-time clustering for optical coherence tomography angiography (OCTA) image quality assessment.
A feature-embedding-based low-quality representation module is proposed to quantify the quality of OCTA images.
We perform dimension reduction and clustering of multi-scale image features extracted by the trained OCTA quality representation network.
- Score: 4.901218498977952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image quality assessment (MIQA) is a vital prerequisite in various
medical image analysis applications. Most existing MIQA algorithms are fully
supervised and require a large amount of annotated data. However, annotating
medical images is time-consuming and labor-intensive. In this paper, we propose
an unsupervised anomaly-aware framework with test-time clustering for optical
coherence tomography angiography (OCTA) image quality assessment in a setting
wherein only a set of high-quality samples is accessible in the training
phase. Specifically, a feature-embedding-based low-quality representation
module is proposed to quantify the quality of OCTA images and then to
discriminate between outstanding quality and non-outstanding quality. Within
the non-outstanding quality class, to further distinguish gradable images from
ungradable ones, we perform dimension reduction and clustering of multi-scale
image features extracted by the trained OCTA quality representation network.
Extensive experiments on the publicly accessible sOCTA-3*3-10k dataset
establish the superiority of the proposed framework.
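The test-time stage described above can be sketched minimally: multi-scale features from the quality representation network are reduced in dimension and then clustered to split the non-outstanding-quality class into gradable and ungradable images. In this sketch the feature extractor is replaced with synthetic data, and the feature dimension, PCA components, and cluster count are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of test-time dimension reduction + clustering (PCA + k-means),
# with random stand-ins for the network's multi-scale image features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "non-outstanding quality" features: 100 images x 512 dims,
# drawn from two separated groups standing in for gradable/ungradable.
gradable = rng.normal(0.0, 1.0, size=(60, 512))
ungradable = rng.normal(4.0, 1.0, size=(40, 512))
features = np.vstack([gradable, ungradable])

# Reduce dimensionality before clustering, as in the test-time stage.
reduced = PCA(n_components=16, random_state=0).fit_transform(features)

# Two clusters: gradable vs. ungradable.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
```

Because clustering is unsupervised, which cluster corresponds to "gradable" must still be decided afterwards, e.g. by comparing each cluster's mean quality score from the representation module.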
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Pairwise Comparisons Are All You Need [22.798716660911833]
Blind image quality assessment (BIQA) approaches often fall short in real-world scenarios due to their reliance on a generic quality standard applied uniformly across diverse images.
This paper introduces PICNIQ, a pairwise comparison framework designed to bypass the limitations of conventional BIQA.
By employing psychometric scaling algorithms, PICNIQ transforms pairwise comparisons into just-objectionable-difference (JOD) quality scores, offering a granular and interpretable measure of image quality.
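The idea of turning pairwise preferences into per-image quality scores can be illustrated with a Bradley-Terry model, a common psychometric-scaling choice; PICNIQ's exact algorithm and JOD calibration may differ, and the count matrix below is invented.

```python
# Toy Bradley-Terry scaling: convert a pairwise-comparison count matrix
# into per-image quality scores via the standard MM (minorization-
# maximization) update.
import numpy as np

def bradley_terry(wins, iters=200):
    """wins[i, j] = number of times image i was preferred over image j."""
    n = wins.shape[0]
    scores = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (scores[i] + scores[j])
                      for j in range(n) if j != i)
            scores[i] = num / den
        scores /= scores.sum()  # fix the arbitrary scale
    return scores

# Three images: A is clearly preferred over B, and B over C.
wins = np.array([[0, 9, 10],
                 [1, 0, 8],
                 [0, 2, 0]])
scores = bradley_terry(wins)
```

The recovered scores preserve the preference ordering (A > B > C) while aggregating possibly inconsistent individual judgments.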
arXiv Detail & Related papers (2024-03-13T23:43:36Z)
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z)
- MD-IQA: Learning Multi-scale Distributed Image Quality Assessment with Semi Supervised Learning for Low Dose CT [6.158876574189994]
Image quality assessment (IQA) plays a critical role in optimizing radiation dose and developing novel medical imaging techniques.
Recent deep learning-based approaches have demonstrated strong modeling capabilities and potential for medical IQA.
We propose a multi-scale distributions regression approach to predict quality scores by constraining the output distribution.
arXiv Detail & Related papers (2023-11-14T09:33:33Z)
- Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z)
- Self-supervised Domain Adaptation for Breaking the Limits of Low-quality Fundus Image Quality Enhancement [14.677912534121273]
Low-quality fundus images and style inconsistency potentially increase uncertainty in the diagnosis of fundus disease.
We formulate two self-supervised domain adaptation tasks to disentangle the features of image content, low-quality factor and style information.
Our DASQE method achieves new state-of-the-art performance when only low-quality images are available.
arXiv Detail & Related papers (2023-01-17T15:07:20Z)
- Related Work on Image Quality Assessment [0.103341388090561]
Image quality assessment (IQA) plays a vital role in image-based applications.
This article will review the state-of-the-art image quality assessment algorithms.
arXiv Detail & Related papers (2021-11-11T16:11:27Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
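A contrastive pairwise objective of the kind summarized above can be sketched with an InfoNCE-style loss, where embeddings of two views of the same image form the positive pair; CONTRIQUE's actual loss and training setup may differ, and the embeddings below are random stand-ins for CNN features.

```python
# Toy InfoNCE-style contrastive loss over embedding pairs.
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """z1[i] and z2[i] are embeddings of two views of image i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    # For each row, the positive pair sits on the diagonal.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
loss_aligned = info_nce(z, z)                        # matched views
loss_random = info_nce(z, rng.normal(size=(8, 32)))  # unrelated views
```

Matched views yield a much lower loss than unrelated ones, which is the signal that drives representation learning without quality labels.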
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)