Generalized Visual Quality Assessment of GAN-Generated Face Images
- URL: http://arxiv.org/abs/2201.11975v1
- Date: Fri, 28 Jan 2022 07:54:49 GMT
- Title: Generalized Visual Quality Assessment of GAN-Generated Face Images
- Authors: Yu Tian and Zhangkai Ni and Baoliang Chen and Shiqi Wang and Hanli
Wang and Sam Kwong
- Abstract summary: We study subjective and objective quality toward generalized quality assessment of GAN-generated face images (GFIs).
We develop a quality assessment model that delivers accurate quality predictions for GFIs from both available and unseen GAN algorithms.
- Score: 79.47386781978531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed dramatically increased interest in face
generation with generative adversarial networks (GANs). A number of successful
GAN algorithms have been developed to produce vivid face images for different
application scenarios. However, little work has been dedicated to automatic
quality assessment of such GAN-generated face images (GFIs), and even less to
generalized and robust quality assessment of GFIs generated by unseen GAN
models. Herein, we make the first attempt to study subjective and objective
quality toward generalized quality assessment of GFIs. More specifically, we
establish a large-scale database consisting of GFIs from four GAN algorithms,
pseudo labels from image quality assessment (IQA) measures, and human opinion
scores collected via subjective testing. Subsequently, we develop a
meta-learning-based quality assessment model that delivers accurate quality
predictions for GFIs from both available and unseen GAN algorithms. In
particular, to learn shared knowledge from GFI pairs generated by a limited set
of GAN algorithms, we develop convolutional block attention (CBA) and facial
attributes-based analysis (ABA) modules, ensuring that the learned knowledge
stays consistent with human visual perception. Extensive experiments show that
the proposed model outperforms state-of-the-art IQA models and remains
effective when evaluating GFIs from unseen GAN algorithms.
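The paper ships no reference implementation here, but the CBA module's name echoes the widely used convolutional-block-attention (CBAM-style) design: channel gating followed by spatial gating over a feature map. The PyTorch sketch below is a generic rendition under that assumption, not the authors' exact module; all names are illustrative.

```python
import torch
import torch.nn as nn

class ConvBlockAttention(nn.Module):
    """CBAM-style block: channel attention, then spatial attention.
    A generic sketch; the paper's CBA module may differ in detail."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP scoring channels from pooled descriptors
        # (assumes channels >= reduction).
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 7x7 conv over stacked channel-pooled maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel gate from average- and max-pooled features.
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3)))
                             + self.mlp(x.amax(dim=(2, 3))))
        x = x * gate.view(b, c, 1, 1)
        # Spatial gate from channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

A block like this can be dropped after any convolutional stage of a quality backbone so the network re-weights face regions before the meta-learned quality head.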
Related papers
- AU-IQA: A Benchmark Dataset for Perceptual Quality Assessment of AI-Enhanced User-Generated Content [43.82962694838953]
AI-based image enhancement techniques have been widely adopted in various visual applications, significantly improving the perceptual quality of user-generated content (UGC). The lack of specialized quality assessment models has become a significant bottleneck in this field, degrading user experience and hindering the advancement of enhancement methods. We construct AU-IQA, a benchmark dataset comprising 4,800 AI-UGC images produced by three representative enhancement types. On this dataset, we evaluate a range of existing quality assessment models, including traditional IQA methods and large multimodal models.
arXiv Detail & Related papers (2025-08-07T03:55:11Z) - Enhancing Underwater Images Using Deep Learning with Subjective Image Quality Integration [0.8287206589886879]
This paper presents a deep learning-based approach to improving underwater image quality. We use publicly available datasets containing underwater images labeled by experts as either high or low quality. Results demonstrate that the proposed model achieves substantial improvements in both perceived and measured image quality.
arXiv Detail & Related papers (2025-07-07T18:25:13Z) - Quality Assessment and Distortion-aware Saliency Prediction for AI-Generated Omnidirectional Images [70.49595920462579]
This work studies the quality assessment and distortion-aware saliency prediction problems for AIGODIs. We propose two models with shared encoders based on the BLIP-2 model to evaluate the human visual experience and predict distortion-aware saliency for AI-generated omnidirectional images.
arXiv Detail & Related papers (2025-06-27T05:36:04Z) - AGHI-QA: A Subjective-Aligned Dataset and Metric for AI-Generated Human Images [58.87047247313503]
We introduce AGHI-QA, the first large-scale benchmark specifically designed for quality assessment of AI-generated human images (AGHIs).
The dataset comprises 4,000 images generated from 400 carefully crafted text prompts using 10 state-of-the-art T2I models.
We conduct a systematic subjective study to collect multidimensional annotations, including perceptual quality scores, text-image correspondence scores, and visible and distorted body part labels.
arXiv Detail & Related papers (2025-04-30T04:36:56Z) - Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z) - Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z) - G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experiments show that AIGIs processed by G-Refine outperform their originals on 10+ quality metrics across 4 databases.
arXiv Detail & Related papers (2024-04-29T00:54:38Z) - AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database for AI-generated images to date.
We conduct a benchmark experiment on this database to evaluate the consistency between current Image Quality Assessment (IQA) models and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z) - Compound Frechet Inception Distance for Quality Assessment of GAN Created Images [7.628527132779575]
One notable application of GANs is the generation of fake human faces, also known as "deep fakes".
Measuring the quality of generated images is inherently subjective, but attempts have been made to objectify quality using standardized metrics.
We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects.
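Background for the Compound FID entry above: the standard (single-level) Frechet Inception Distance compares Gaussian fits of real and generated feature statistics; the paper's compound variant additionally folds in lower-level features, which this sketch does not reproduce.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Standard FID between two feature sets (rows = samples).
    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop numerical-noise imaginary parts
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower values indicate that the two feature distributions are closer, i.e., the generated images better match the real ones in feature space.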
arXiv Detail & Related papers (2021-06-16T06:53:27Z) - Deep Tiny Network for Recognition-Oriented Face Image Quality Assessment [26.792481400792376]
In many face recognition (FR) scenarios, face images are acquired from a sequence with large intra-variations.
We present an efficient no-reference image quality assessment method for FR that directly links image quality assessment (IQA) and FR.
Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data.
arXiv Detail & Related papers (2021-06-09T07:20:54Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
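The fidelity loss mentioned in the entry above compares a ground-truth preference probability for an image pair against the model's predicted probability. A minimal sketch under a Thurstone-style assumption (predicted quality means and variances per image; all names are illustrative):

```python
import torch

def preference_prob(mu_a, var_a, mu_b, var_b, eps: float = 1e-8):
    """P(image A preferred over B) under a Gaussian quality model:
    Phi((mu_a - mu_b) / sqrt(var_a + var_b)), Phi = standard normal CDF."""
    z = (mu_a - mu_b) / torch.sqrt(var_a + var_b + eps)
    return 0.5 * (1.0 + torch.erf(z / 2 ** 0.5))

def fidelity_loss(p_true, p_pred, eps: float = 1e-8):
    """Fidelity loss: 1 - sqrt(p * p_hat) - sqrt((1 - p) * (1 - p_hat)).
    Reaches zero when predicted and true preference probabilities match."""
    return (1.0
            - torch.sqrt(p_true * p_pred + eps)
            - torch.sqrt((1.0 - p_true) * (1.0 - p_pred) + eps)).mean()
```

Unlike cross-entropy, the fidelity loss is bounded and symmetric in the two probabilities, which makes it well suited to noisy pairwise preference labels.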
arXiv Detail & Related papers (2020-05-28T13:35:23Z) - GIQA: Generated Image Quality Assessment [36.01759301994946]
Generative adversarial networks (GANs) have achieved impressive results, but not all generated images are perfect.
We propose Generated Image Quality Assessment (GIQA), which quantitatively evaluates the quality of each generated image.
arXiv Detail & Related papers (2020-03-19T17:56:08Z)