Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images
- URL: http://arxiv.org/abs/2303.08050v4
- Date: Wed, 1 Nov 2023 07:18:09 GMT
- Title: Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images
- Authors: Zicheng Zhang, Wei Sun, Yingjie Zhou, Jun Jia, Zhichao Zhang, Jing Liu, Xiongkuo Min, and Guangtao Zhai
- Abstract summary: We build a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k)
We propose an effective deep learning-based no-reference (NR) IQA model by utilizing both distortion and aesthetic quality representation.
Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database.
- Score: 57.02760260360728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer graphics images (CGIs) are artificially generated by means of
computer programs and are widely perceived under various scenarios, such as
games, streaming media, etc. In practice, the quality of CGIs consistently
suffers from poor rendering during production, inevitable compression artifacts
during the transmission of multimedia applications, and low aesthetic quality
resulting from poor composition and design. However, few works have been
dedicated to dealing with the challenge of computer graphics image quality
assessment (CGIQA). Most image quality assessment (IQA) metrics are developed
for natural scene images (NSIs) and validated on databases consisting of NSIs
with synthetic distortions, which are not suitable for in-the-wild CGIs. To
bridge the gap between evaluating the quality of NSIs and CGIs, we construct a
large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k) and
carry out the subjective experiment in a well-controlled laboratory environment
to obtain the accurate perceptual ratings of the CGIs. Then, we propose an
effective deep learning-based no-reference (NR) IQA model by utilizing both
distortion and aesthetic quality representation. Experimental results show that
the proposed method outperforms all other state-of-the-art NR IQA methods on
the constructed CGIQA-6k database and other CGIQA-related databases. The
database is released at https://github.com/zzc-1998/CGIQA6K.
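The abstract describes the proposed NR model only at a high level: a distortion-quality representation and an aesthetic-quality representation are combined by a deep network to predict a quality score. The sketch below is a minimal illustration of such a two-branch design, not the authors' actual architecture; the ResNet-50 backbones, concatenation fusion, and layer sizes are assumptions for illustration.

```python
# Hypothetical two-branch NR-IQA sketch: one branch for distortion-related
# features, one for aesthetic-related features, fused and regressed to a
# single quality score (e.g., a MOS prediction). Backbones, fusion, and
# layer sizes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TwoBranchNRIQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Distortion branch: ImageNet-pretrained CNN with the classifier removed.
        self.distortion_branch = resnet50(weights="IMAGENET1K_V1")
        self.distortion_branch.fc = nn.Identity()
        # Aesthetic branch: a second backbone, intended to be fine-tuned on
        # aesthetics-related labels (assumption for illustration).
        self.aesthetic_branch = resnet50(weights="IMAGENET1K_V1")
        self.aesthetic_branch.fc = nn.Identity()
        # Fuse the two 2048-d representations and regress a quality score.
        self.regressor = nn.Sequential(
            nn.Linear(2048 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, 1),
        )

    def forward(self, x):
        d = self.distortion_branch(x)   # distortion-quality representation
        a = self.aesthetic_branch(x)    # aesthetic-quality representation
        return self.regressor(torch.cat([d, a], dim=1)).squeeze(-1)

# Usage: predicted scores for a batch of CGIs (3x224x224 crops assumed).
model = TwoBranchNRIQA().eval()
with torch.no_grad():
    scores = model(torch.randn(4, 3, 224, 224))  # shape: (4,)
```

In a setup like this, the two branches would typically be pre-trained on distortion-oriented and aesthetics-oriented tasks respectively before the whole model is fine-tuned on the MOS labels of CGIQA-6k; the exact training recipe is not specified in the abstract.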
Related papers
- PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images [1.5265677582796984]
We establish a large scale perceptual quality assessment database for both text-to-image and image-to-image AIGIs, named PKU-AIGIQA-4K.
We propose three image quality assessment (IQA) methods based on pre-trained models that include a no-reference method NR-AIGCIQA, a full-reference method FR-AIGCIQA, and a partial-reference method PR-AIGCIQA.
arXiv Detail & Related papers (2024-04-29T03:57:43Z)
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to sense semantic information and extract semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance, and demonstrates its superior generalization capabilities on assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images [1.6031185986328562]
We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA based on the no-reference image quality assessment method and FR-AIGCIQA based on the full-reference image quality assessment method.
arXiv Detail & Related papers (2023-11-27T05:53:03Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build the most comprehensive subjective quality database AGIQA-3K so far.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- A Perceptual Quality Assessment Exploration for AIGC Images [39.72512063793346]
In this paper, we discuss the major evaluation aspects such as technical issues, AI artifacts, unnaturalness, discrepancy, and aesthetics for AGI quality assessment.
We present the first perceptual AGI quality assessment database, AGIQA-1K, which consists of 1,080 AGIs generated from diffusion models.
arXiv Detail & Related papers (2023-03-22T14:59:49Z)
- Subjective Quality Assessment for Images Generated by Computer Graphics [40.86516321054218]
Computer graphics generated images (CGIs) have been widely used in practical application scenarios such as architecture design, video games, simulators, movies, etc.
Some CGIs may also suffer from compression distortions in transmission systems like cloud gaming and stream media.
We establish a large-scale subjective CG-IQA database to deal with the challenge of CG-IQA tasks.
arXiv Detail & Related papers (2022-06-10T11:48:24Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of image pairs sampled from multiple IQA databases (a sketch of this loss follows the list).
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
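The last entry above trains a blind IQA model on image pairs with the fidelity loss. Below is a minimal sketch of that pairwise setup, assuming the usual Thurstone-style Gaussian comparison of two predicted quality means and variances; the epsilon value and the way ground-truth pair probabilities are obtained are illustrative choices, not necessarily the paper's exact ones.

```python
# Hedged sketch of pairwise BIQA training with the fidelity loss.
# The Gaussian comparison and epsilon are common choices for illustration.
import math
import torch

def pairwise_probability(mu_a, var_a, mu_b, var_b):
    """P(image A is perceived as better than B) under a Gaussian model."""
    z = (mu_a - mu_b) / torch.sqrt(var_a + var_b + 1e-8)
    return 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))

def fidelity_loss(p_hat, p, eps=1e-8):
    """Fidelity loss between predicted and ground-truth pair probabilities."""
    return 1.0 - torch.sqrt(p_hat * p + eps) - torch.sqrt((1.0 - p_hat) * (1.0 - p) + eps)

# Example: a model is assumed to predict a quality mean and variance per image.
mu_a, var_a = torch.tensor(3.2), torch.tensor(0.4)
mu_b, var_b = torch.tensor(2.7), torch.tensor(0.5)
p = torch.tensor(1.0)  # ground truth: A judged better than B
loss = fidelity_loss(pairwise_probability(mu_a, var_a, mu_b, var_b), p)
```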