GIQA: Generated Image Quality Assessment
- URL: http://arxiv.org/abs/2003.08932v3
- Date: Tue, 14 Jul 2020 09:57:48 GMT
- Title: GIQA: Generated Image Quality Assessment
- Authors: Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen
- Abstract summary: Generative adversarial networks (GANs) have achieved impressive results today, but not all generated images are perfect.
We propose Generated Image Quality Assessment (GIQA), which quantitatively evaluates the quality of each generated image.
- Score: 36.01759301994946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have achieved impressive results
today, but not all generated images are perfect. A number of quantitative
criteria have recently emerged for generative models, but none of them is
designed to assess a single generated image. In this paper, we propose a new research
topic, Generated Image Quality Assessment (GIQA), which quantitatively
evaluates the quality of each generated image. We introduce three GIQA
algorithms from two perspectives: learning-based and data-based. We evaluate a
number of images generated by various recent GAN models on different datasets
and demonstrate that they are consistent with human assessments. Furthermore,
GIQA lends itself to many applications, such as separately evaluating the realism
and diversity of generative models, and enabling online hard example mining
(OHEM) during GAN training to improve the results.
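The data-based perspective described above can be sketched with a nearest-neighbour score: a generated image whose features lie close to the manifold of real-image features is rated higher than one that lies far away. The snippet below is a minimal illustration of that idea only, not the paper's implementation; the toy 2-D "features" and the function name `knn_quality_score` are assumptions standing in for the deep features the paper actually uses.

```python
import math

def knn_quality_score(feature, real_features, k=3):
    """Score a generated image's feature vector by its distance to the
    k-th nearest real-image feature: a smaller distance (closer to the
    real-data manifold) yields a higher quality score.
    Sketch of the data-based idea; real systems use deep features."""
    dists = sorted(math.dist(feature, r) for r in real_features)
    # Negate the k-th nearest-neighbour distance so "closer" scores higher.
    return -dists[min(k, len(dists)) - 1]

# Toy 2-D "features": real samples cluster near the origin.
real = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
good = knn_quality_score((0.05, 0.05), real, k=3)  # near the cluster
bad = knn_quality_score((5.0, 5.0), real, k=3)     # far from the cluster
assert good > bad  # the in-distribution sample scores higher
```

The same per-image scores could then drive the OHEM-style training mentioned above, e.g. by reweighting or resampling low-scoring generated images during GAN updates.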
Related papers
- CLIP-AGIQA: Boosting the Performance of AI-Generated Image Quality Assessment with CLIP [5.983562693055378]
We develop CLIP-AGIQA, a CLIP-based regression model for quality assessment of generated images.
We implement multi-category learnable prompts to fully utilize the textual knowledge in CLIP for quality assessment.
arXiv Detail & Related papers (2024-08-27T14:30:36Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database to date.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Generalized Visual Quality Assessment of GAN-Generated Face Images [79.47386781978531]
We study subjective and objective quality toward generalized quality assessment of GAN-generated face images (GFIs).
We develop a quality assessment model that is able to deliver accurate quality predictions for GFIs from both available and unseen GAN algorithms.
arXiv Detail & Related papers (2022-01-28T07:54:49Z)
- Collaging Class-specific GANs for Semantic Image Synthesis [68.87294033259417]
We propose a new approach for high resolution semantic image synthesis.
It consists of one base image generator and multiple class-specific generators.
Experiments show that our approach can generate high quality images in high resolution.
arXiv Detail & Related papers (2021-10-08T17:46:56Z)
- Compound Fréchet Inception Distance for Quality Assessment of GAN Created Images [7.628527132779575]
One notable application of GANs is generating fake human faces, also known as "deep fakes".
Measuring the quality of the generated images is inherently subjective but attempts to objectify quality using standardized metrics have been made.
We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects.
arXiv Detail & Related papers (2021-06-16T06:53:27Z)
- Image Synthesis with Adversarial Networks: a Comprehensive Survey and Case Studies [41.00383742615389]
Generative Adversarial Networks (GANs) have been extremely successful in various application domains such as computer vision, medicine, and natural language processing.
GANs are powerful models for learning complex distributions to synthesize semantically meaningful samples.
Given the current fast GANs development, in this survey, we provide a comprehensive review of adversarial models for image synthesis.
arXiv Detail & Related papers (2020-12-26T13:30:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.