Subjective Quality Assessment for Images Generated by Computer Graphics
- URL: http://arxiv.org/abs/2206.05008v1
- Date: Fri, 10 Jun 2022 11:48:24 GMT
- Title: Subjective Quality Assessment for Images Generated by Computer Graphics
- Authors: Tao Wang, Zicheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
- Abstract summary: Computer graphics generated images (CGIs) have been widely used in practical application scenarios such as architecture design, video games, simulators, movies, etc.
Some CGIs may also suffer from compression distortions in transmission systems such as cloud gaming and streaming media.
We establish a large-scale subjective CG-IQA database to deal with the challenge of CG-IQA tasks.
- Score: 40.86516321054218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of rendering techniques, computer graphics generated
images (CGIs) have been widely used in practical application scenarios such as
architecture design, video games, simulators, movies, etc. Different from
natural scene images (NSIs), the distortions of CGIs are usually caused by poor
rendering settings and limited computational resources. Moreover, some CGIs may
also suffer from compression distortions in transmission systems such as cloud
gaming and streaming media. However, limited work has been devoted to the problem
of computer graphics generated image quality assessment (CG-IQA).
Therefore, in this paper, we establish a large-scale subjective CG-IQA database
to address the challenges of CG-IQA tasks. We collect 25,454 in-the-wild CGIs
from previous databases and personal collections. After data cleaning, we
carefully select 1,200 CGIs for the subjective experiment. Several
popular no-reference image quality assessment (NR-IQA) methods are tested on
our database. The experimental results show that handcrafted methods
achieve low correlation with subjective judgments while deep learning-based
methods obtain relatively better performance, which indicates that
current NR-IQA models are not well suited to CG-IQA tasks and more effective
models are urgently needed.
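The benchmark described above hinges on measuring how well each NR-IQA model's predictions correlate with the subjective scores. Below is a minimal sketch of that evaluation step using SROCC and PLCC, assuming predictions and mean opinion scores (MOS) are available as plain arrays; the numbers in the usage example are hypothetical placeholders rather than data from the CG-IQA database, and the paper may additionally apply a nonlinear mapping before computing PLCC.

```python
# Minimal sketch: correlating NR-IQA predictions with subjective MOS.
# The score arrays below are hypothetical, not taken from the CG-IQA database.
import numpy as np
from scipy.stats import pearsonr, spearmanr


def evaluate_iqa(predicted, mos):
    """Return (SROCC, PLCC) between predicted quality scores and MOS."""
    predicted = np.asarray(predicted, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srocc, _ = spearmanr(predicted, mos)  # monotonic (rank-order) agreement
    plcc, _ = pearsonr(predicted, mos)    # linear agreement
    return srocc, plcc


if __name__ == "__main__":
    preds = [0.62, 0.71, 0.40, 0.85, 0.55]  # model outputs on an arbitrary scale
    mos = [3.1, 3.8, 2.2, 4.5, 2.9]         # subjective ratings, e.g. on a 1-5 scale
    srocc, plcc = evaluate_iqa(preds, mos)
    print(f"SROCC={srocc:.3f}, PLCC={plcc:.3f}")
```

Higher SROCC/PLCC indicates closer agreement with human judgment; the abstract's claim is that handcrafted methods score low on such correlations while deep learning-based methods fare somewhat better.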
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images [1.5265677582796984]
We establish a large-scale perceptual quality assessment database for both text-to-image and image-to-image AIGIs, named PKU-AIGIQA-4K.
We propose three image quality assessment (IQA) methods based on pre-trained models that include a no-reference method NR-AIGCIQA, a full-reference method FR-AIGCIQA, and a partial-reference method PR-AIGCIQA.
arXiv Detail & Related papers (2024-04-29T03:57:43Z)
- PKU-I2IQA: An Image-to-Image Quality Assessment Database for AI Generated Images [1.6031185986328562]
We establish a human perception-based image-to-image AIGCIQA database, named PKU-I2IQA.
We propose two benchmark models: NR-AIGCIQA based on the no-reference image quality assessment method and FR-AIGCIQA based on the full-reference image quality assessment method.
arXiv Detail & Related papers (2023-11-27T05:53:03Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build AGIQA-3K, the most comprehensive subjective quality database for AI-generated images to date.
We conduct a benchmark experiment on this database to evaluate the consistency between current Image Quality Assessment (IQA) models and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images [57.02760260360728]
We build CGIQA-6k, a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs.
We propose an effective deep learning-based no-reference (NR) IQA model by utilizing both distortion and aesthetic quality representation.
Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database.
arXiv Detail & Related papers (2023-03-14T16:32:24Z)
- Perceptual Quality Assessment of Omnidirectional Images [81.76416696753947]
We first establish an omnidirectional IQA (OIQA) database, which includes 16 source images and 320 distorted images degraded by 4 commonly encountered distortion types.
Then a subjective quality evaluation study is conducted on the OIQA database in the VR environment.
The original and distorted omnidirectional images, subjective quality ratings, and the head and eye movement data together constitute the OIQA database.
arXiv Detail & Related papers (2022-07-06T13:40:38Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs (a minimal mixing sketch follows this list).
An objective metric termed CFIQA is also proposed to better evaluate confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
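The CFIQA entry above describes distorted images produced by mixing reference images in pairs, but the summary does not specify the mixing operator. The sketch below assumes a simple alpha blend; the file paths and blend ratio are hypothetical and may differ from the paper's actual superimposition procedure.

```python
# Minimal sketch: superimposing two reference images with an alpha blend,
# in the spirit of the pairwise mixing described in the CFIQA entry.
# File paths and the blend ratio are hypothetical.
from PIL import Image


def mix_pair(path_a, path_b, alpha=0.5):
    """Blend two reference images: result = (1 - alpha) * a + alpha * b."""
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB").resize(img_a.size)  # match sizes
    return Image.blend(img_a, img_b, alpha)


# Hypothetical usage: mix two reference images and save the confusing image.
# mix_pair("ref_001.png", "ref_002.png", alpha=0.5).save("mixed_001_002.png")
```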