Generalized Visual Quality Assessment of GAN-Generated Face Images
- URL: http://arxiv.org/abs/2201.11975v1
- Date: Fri, 28 Jan 2022 07:54:49 GMT
- Title: Generalized Visual Quality Assessment of GAN-Generated Face Images
- Authors: Yu Tian and Zhangkai Ni and Baoliang Chen and Shiqi Wang and Hanli
Wang and Sam Kwong
- Abstract summary: We study subjective and objective quality toward generalized quality assessment of GAN-generated face images (GFIs).
We develop a quality assessment model that delivers accurate quality predictions for GFIs from both available and unseen GAN algorithms.
- Score: 79.47386781978531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed dramatically increased interest in face
generation with generative adversarial networks (GANs). A number of successful
GAN algorithms have been developed to produce vivid face images for different
application scenarios. However, little work has been dedicated to automatic
quality assessment of such GAN-generated face images (GFIs), and even less to
generalized and robust quality assessment of GFIs generated by unseen GAN
models. Herein, we make the first attempt to study subjective and objective
quality toward generalized quality assessment of GFIs. More specifically, we
establish a large-scale database consisting of GFIs from four GAN algorithms,
pseudo labels from image quality assessment (IQA) measures, and human opinion
scores collected via subjective testing. Subsequently, based on meta-learning,
we develop a quality assessment model that delivers accurate quality
predictions for GFIs from both available and unseen GAN algorithms. In
particular, to learn shared knowledge from GFI pairs produced by a limited set
of GAN algorithms, we develop convolutional block attention (CBA) and facial
attributes-based analysis (ABA) modules, ensuring that the learned knowledge
remains consistent with human visual perception. Extensive experiments show
that the proposed model outperforms state-of-the-art IQA models and retains
its effectiveness when evaluating GFIs from unseen GAN algorithms.
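The convolutional block attention mentioned in the abstract follows the general CBAM pattern: pool a feature map over its spatial dimensions, pass the pooled statistics through a shared MLP, and reweight channels with a sigmoid gate. The paper's exact CBA module is not specified in this listing, so the following is only a minimal NumPy sketch of the channel-attention half, with hypothetical weight shapes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention: average- and max-pool the spatial
    dimensions, run both pooled vectors through a shared two-layer MLP,
    sum, squash with a sigmoid, and rescale each channel."""
    # feat: (C, H, W); w1: (C//r, C); w2: (C, C//r), r = reduction ratio
    avg = feat.mean(axis=(1, 2))                   # (C,) average-pooled
    mx = feat.max(axis=(1, 2))                     # (C,) max-pooled
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))            # (C,) gates in (0, 1)
    return feat * scale[:, None, None]             # channel-wise reweighting

# Toy example with random features and weights (shapes are illustrative).
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gates lie in (0, 1), the module can only attenuate channels, never amplify them; informative channels keep gates near 1 while uninformative ones are suppressed.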
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z) - Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z) - G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experiments show that AIGIs refined by G-Refine outperform the originals on 10+ quality metrics across 4 databases.
arXiv Detail & Related papers (2024-04-29T00:54:38Z) - AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database for AI-generated images to date.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z) - Compound Frechet Inception Distance for Quality Assessment of GAN Created Images [7.628527132779575]
One notable application of GANs is generating fake human faces, also known as "deep fakes".
Measuring the quality of generated images is inherently subjective, but attempts have been made to objectify quality with standardized metrics.
We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects.
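The Fréchet Inception Distance that this line of work builds on fits a Gaussian to each feature distribution and compares them: FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2(C_a C_b)^{1/2}). A minimal sketch over pre-extracted feature vectors (the feature extractor itself, typically an Inception network, is omitted here):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two sets of feature vectors,
    each of shape (num_samples, dim):
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        # Numerical noise in sqrtm can produce tiny imaginary parts.
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy check: identical distributions give ~0; a mean shift raises the score.
rng = np.random.default_rng(1)
a = rng.standard_normal((256, 8))
b = a + 5.0  # same covariance, shifted mean
```

Lower is better: FID is zero only when both Gaussian fits coincide, which is why a mean shift alone already drives the score up while the covariance term stays near zero.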
arXiv Detail & Related papers (2021-06-16T06:53:27Z) - Deep Tiny Network for Recognition-Oriented Face Image Quality Assessment [26.792481400792376]
In many face recognition (FR) scenarios, face images are acquired from a sequence with huge intra-variations.
We present an efficient non-reference image quality assessment for FR that directly links image quality assessment (IQA) and FR.
Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data.
arXiv Detail & Related papers (2021-06-09T07:20:54Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z) - GIQA: Generated Image Quality Assessment [36.01759301994946]
Generative adversarial networks (GANs) have achieved impressive results today, but not all generated images are perfect.
We propose Generated Image Quality Assessment (GIQA), which quantitatively evaluates the quality of each generated image.
arXiv Detail & Related papers (2020-03-19T17:56:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.