ELIQ: A Label-Free Framework for Quality Assessment of Evolving AI-Generated Images
- URL: http://arxiv.org/abs/2602.03558v1
- Date: Tue, 03 Feb 2026 14:04:51 GMT
- Title: ELIQ: A Label-Free Framework for Quality Assessment of Evolving AI-Generated Images
- Authors: Xinyue Li, Zhiming Xu, Zhichao Zhang, Zhaolin Cai, Sijing Wu, Xiongkuo Min, Yitong Chen, Guangtao Zhai,
- Abstract summary: We present ELIQ, a Label-Free Framework for Quality Assessment of Evolving AI-Generated Images. Specifically, ELIQ focuses on visual quality and prompt-image alignment. It automatically constructs positive and aspect-specific negative pairs to cover both conventional distortions and AIGC-specific distortion modes.
- Score: 76.5101823186747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative text-to-image models are advancing at an unprecedented pace, continuously shifting the perceptual quality ceiling and rendering previously collected labels unreliable for newer generations. To address this, we present ELIQ, a Label-Free Framework for Quality Assessment of Evolving AI-Generated Images. Specifically, ELIQ focuses on visual quality and prompt-image alignment: it automatically constructs positive and aspect-specific negative pairs to cover both conventional distortions and AIGC-specific distortion modes, enabling transferable supervision without human annotations. Building on these pairs, ELIQ adapts a pre-trained multimodal model into a quality-aware critic via instruction tuning and predicts two-dimensional quality using lightweight gated fusion and a Quality Query Transformer. Experiments across multiple benchmarks demonstrate that ELIQ consistently outperforms existing label-free methods, generalizes from AI-generated content (AIGC) to user-generated content (UGC) scenarios without modification, and paves the way for scalable, label-free quality assessment under continuously evolving generative models. The code will be released upon publication.
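The abstract describes a lightweight gated fusion step that combines the two quality dimensions (visual quality and prompt-image alignment) into a two-dimensional prediction. The paper does not give implementation details, so the following is only a minimal illustrative sketch of what a sigmoid-gated fusion of two feature streams could look like; all weight names and dimensions are assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(vis_feat, align_feat, W_g, W_h):
    # A learned sigmoid gate decides, per feature dimension, how much
    # weight the visual-quality stream gets versus the alignment stream.
    z = np.concatenate([vis_feat, align_feat], axis=-1) @ W_g
    g = 1.0 / (1.0 + np.exp(-z))            # gate in (0, 1)
    fused = g * vis_feat + (1.0 - g) * align_feat
    return fused @ W_h                       # two scores per image

dim = 8
W_g = rng.normal(size=(2 * dim, dim))        # gate projection (assumed shape)
W_h = rng.normal(size=(dim, 2))              # head: (visual quality, alignment)
scores = gated_fusion(rng.normal(size=(4, dim)),
                      rng.normal(size=(4, dim)), W_g, W_h)
print(scores.shape)  # (4, 2): two quality dimensions for 4 images
```

In the actual framework, the fused features would additionally pass through the Quality Query Transformer before scoring; this sketch only shows the gating idea.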
Related papers
- AU-IQA: A Benchmark Dataset for Perceptual Quality Assessment of AI-Enhanced User-Generated Content [43.82962694838953]
AI-based image enhancement techniques have been widely adopted in various visual applications, significantly improving the perceptual quality of user-generated content (UGC). The lack of specialized quality assessment models has become a significant limiting factor in this field, limiting user experience and hindering the advancement of enhancement methods. We construct AU-IQA, a benchmark dataset comprising 4,800 AI-UGC images produced by three representative enhancement types. On this dataset, we evaluate a range of existing quality assessment models, including traditional IQA methods and large multimodal models.
arXiv Detail & Related papers (2025-08-07T03:55:11Z) - TRIQA: Image Quality Assessment by Contrastive Pretraining on Ordered Distortion Triplets [31.2422359004089]
No-Reference (NR) IQA remains particularly challenging due to the absence of a reference image. We propose a novel approach that constructs a custom dataset using a limited number of reference content images. We train a quality-aware model using contrastive triplet-based learning, enabling efficient training with fewer samples.
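TRIQA's contrastive triplet-based learning can be illustrated with a standard triplet margin loss over ordered distortions: the embedding of a mildly distorted image (positive) is pulled toward the reference anchor, while a more heavily distorted one (negative) is pushed away. This is a generic sketch of the triplet objective, not the paper's exact formulation; the margin value and 2-D embeddings are illustrative assumptions.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Hinge on the distance gap: positive must be closer to the anchor
    # than the negative by at least `margin`, otherwise a loss is incurred.
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

a = np.array([[0.0, 0.0]])   # reference (anchor) embedding
p = np.array([[0.1, 0.0]])   # mild distortion: should stay close
n = np.array([[2.0, 0.0]])   # severe distortion: should be far
print(triplet_margin_loss(a, p, n))  # 0.0 -- already separated by > margin
```

Because the triplets are ordered by distortion severity, the model learns a quality-aware embedding without per-image human scores.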
arXiv Detail & Related papers (2025-07-16T23:43:12Z) - IQPFR: An Image Quality Prior for Blind Face Restoration and Beyond [56.99331967165238]
Blind Face Restoration (BFR) addresses the challenge of reconstructing degraded low-quality (LQ) facial images into high-quality (HQ) outputs. We propose a novel framework that incorporates an Image Quality Prior (IQP) derived from No-Reference Image Quality Assessment (NR-IQA) models. Our method outperforms state-of-the-art techniques across multiple benchmarks.
arXiv Detail & Related papers (2025-03-12T11:39:51Z) - IQA-Adapter: Exploring Knowledge Transfer from Image Quality Assessment to Diffusion-based Generative Models [0.5356944479760104]
We propose methods to integrate image quality assessment (IQA) models into diffusion-based generators. We show that diffusion models can learn complex qualitative relationships from both IQA models' outputs and internal activations. We introduce IQA-Adapter, a novel framework that conditions generation on target quality levels by learning the implicit relationship between images and quality scores.
arXiv Detail & Related papers (2024-12-02T18:40:19Z) - Few-Shot Image Quality Assessment via Adaptation of Vision-Language Models [93.91086467402323]
We present the Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA), designed to efficiently adapt the visual-language pre-trained model CLIP to IQA tasks. GRMP-IQA consists of two core modules: (i) a Meta-Prompt Pre-training Module and (ii) Quality-Aware Gradient Regularization.
arXiv Detail & Related papers (2024-09-09T07:26:21Z) - G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experimentation reveals that AIGIs after G-Refine outperform in 10+ quality metrics across 4 databases.
arXiv Detail & Related papers (2024-04-29T00:54:38Z) - Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to sense semantic information and extract semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance, and demonstrates its superior generalization capabilities on assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z) - Q-Refine: A Perceptual Quality Refiner for AI-Generated Image [85.89840673640028]
A quality-aware refiner named Q-Refine is proposed.
It uses the Image Quality Assessment (IQA) metric to guide the refining process for the first time.
It can be a general refiner to optimize AIGIs from both fidelity and aesthetic quality levels.
arXiv Detail & Related papers (2024-01-02T09:11:23Z) - Generating Adversarial Examples with an Optimized Quality [12.747258403133035]
Deep learning models are vulnerable to Adversarial Examples (AEs), carefully crafted samples designed to deceive those models.
Recent studies have introduced new adversarial attack methods, but none provided guaranteed quality for the crafted examples.
In this paper, we incorporate Image Quality Assessment (IQA) metrics into the design and generation process of AEs.
arXiv Detail & Related papers (2020-06-30T23:05:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.