PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI
Generated Images
- URL: http://arxiv.org/abs/2311.15556v2
- Date: Wed, 29 Nov 2023 14:16:08 GMT
- Title: PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI
Generated Images
- Authors: Jiquan Yuan, Xinyan Cao, Changjin Li, Fanyi Yang, Jinlong Lin, and
Xixin Cao
- Abstract summary: We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA based on the no-reference image quality assessment method and FR-AIGCIQA based on the full-reference image quality assessment method.
- Score: 1.6031185986328562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As image generation technology advances, AI-based image generation has been
applied in various fields and Artificial Intelligence Generated Content (AIGC)
has garnered widespread attention. However, the development of AI-based image
generative models also brings new problems and challenges. A significant
challenge is that AI-generated images (AIGI) may exhibit unique distortions
compared to natural images, and not all generated images meet the requirements
of the real world. Therefore, it is of great significance to evaluate AIGIs
more comprehensively. Although previous work has established several human
perception-based AIGC image quality assessment (AIGCIQA) databases for
text-generated images, AI image generation spans scenarios such as text-to-image
and image-to-image, so assessing only images generated by text-to-image models is
insufficient. To address this issue, we establish a
human perception-based image-to-image AIGCIQA database, named PKU-I2IQA. We
conduct a well-organized subjective experiment to collect quality labels for
AIGIs and then conduct a comprehensive analysis of the PKU-I2IQA database.
Furthermore, we propose two benchmark models: NR-AIGCIQA based on the
no-reference image quality assessment method and FR-AIGCIQA based on the
full-reference image quality assessment method. Finally, leveraging this
database, we conduct benchmark experiments and compare the performance of the
proposed benchmark models. The PKU-I2IQA database and benchmarks will be
released at https://github.com/jiquan123/I2IQA to facilitate future research.
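A minimal sketch of the two benchmark settings described in the abstract, assuming a pre-trained backbone feeding a regression head: the no-reference (NR) variant scores the generated image alone, while the full-reference (FR) variant also receives a reference image (in the image-to-image setting, presumably the conditioning image). The class names, ResNet-50 backbone, and feature dimensions below are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of NR-/FR-style AIGCIQA regressors; names and backbone
# choice are assumptions, not the PKU-I2IQA reference code.
import torch
import torch.nn as nn
import torchvision.models as models


def make_backbone():
    """Pre-trained ResNet-50 without its classification head (2048-d features)."""
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    return nn.Sequential(*list(resnet.children())[:-1])


class NRAIGCIQA(nn.Module):
    """No-reference variant: predicts a quality score from the generated image."""
    def __init__(self):
        super().__init__()
        self.features = make_backbone()
        self.regressor = nn.Linear(2048, 1)

    def forward(self, generated):
        return self.regressor(self.features(generated).flatten(1))


class FRAIGCIQA(nn.Module):
    """Full-reference variant: also conditions on the reference image."""
    def __init__(self):
        super().__init__()
        self.features = make_backbone()
        self.regressor = nn.Linear(2048 * 2, 1)  # concatenated gen/ref features

    def forward(self, generated, reference):
        f_gen = self.features(generated).flatten(1)
        f_ref = self.features(reference).flatten(1)
        return self.regressor(torch.cat([f_gen, f_ref], dim=1))


if __name__ == "__main__":
    gen = torch.randn(2, 3, 224, 224)    # batch of generated images
    ref = torch.randn(2, 3, 224, 224)    # corresponding reference images
    print(NRAIGCIQA()(gen).shape)        # torch.Size([2, 1])
    print(FRAIGCIQA()(gen, ref).shape)   # torch.Size([2, 1])
```

In practice such regressors would be fit to the collected subjective quality labels with an L1 or MSE loss; the actual architectures and training details are those reported in the paper, not this sketch.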
Related papers
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images [1.5265677582796984]
We establish a large scale perceptual quality assessment database for both text-to-image and image-to-image AIGIs, named PKU-AIGIQA-4K.
We propose three image quality assessment (IQA) methods based on pre-trained models that include a no-reference method NR-AIGCIQA, a full-reference method FR-AIGCIQA, and a partial-reference method PR-AIGCIQA.
arXiv Detail & Related papers (2024-04-29T03:57:43Z)
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses carefully designed text prompts to extract semantic vectors, providing semantically informed guidance.
It achieves state-of-the-art performance and demonstrates superior generalization in assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- AIGIQA-20K: A Large Database for AI-Generated Image Quality Assessment [54.93996119324928]
We create the largest AIGI subjective quality database to date with 20,000 AIGIs and 420,000 subjective ratings, known as AIGIQA-20K.
We conduct benchmark experiments on this database to assess the correspondence between 16 mainstream AIGI quality models and human perception.
arXiv Detail & Related papers (2024-04-04T12:12:24Z)
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
- TIER: Text-Image Encoder-based Regression for AIGC Image Quality Assessment [2.59079758388817]
In AIGCIQA tasks, images are typically generated by generative models using text prompts.
Most existing AIGCIQA methods regress predicted scores directly from individual generated images.
We propose a text-image encoder-based regression (TIER) framework to address this issue.
arXiv Detail & Related papers (2024-01-08T12:35:15Z)
- PSCR: Patches Sampling-based Contrastive Regression for AIGC Image Quality Assessment [1.1744028458220428]
We propose a contrastive regression framework to leverage differences among various generated images for learning a better representation space.
We conduct extensive experiments on three mainstream AIGCIQA databases including AGIQA-1K, AGIQA-3K and AIGCIQA2023.
Results show significant improvements in model performance with the introduction of our proposed PSCR framework.
arXiv Detail & Related papers (2023-12-10T14:18:53Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database to date.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
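The benchmark experiments mentioned in the abstract and in several of the related papers above report how closely objective predictions track human judgments. A common way to quantify this in IQA work (an assumption here, since the summaries above do not spell out their metrics) is the Spearman (SRCC) and Pearson (PLCC) correlation between predicted scores and mean opinion scores; the snippet below uses made-up numbers purely for illustration.

```python
# Illustrative only: measure agreement between model predictions and human
# ratings with Spearman (SRCC) and Pearson (PLCC) correlation. The scores are
# hypothetical placeholders, not data from any of the databases above.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([62.1, 45.3, 78.9, 55.0, 70.2, 33.8])        # subjective scores
predicted = np.array([60.5, 48.0, 75.1, 58.2, 69.0, 40.1])  # model outputs

srcc, _ = spearmanr(predicted, mos)  # rank correlation (prediction monotonicity)
plcc, _ = pearsonr(predicted, mos)   # linear correlation (prediction accuracy)
print(f"SRCC = {srcc:.4f}, PLCC = {plcc:.4f}")
```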