Confusing Image Quality Assessment: Towards Better Augmented Reality Experience
- URL: http://arxiv.org/abs/2204.04900v1
- Date: Mon, 11 Apr 2022 07:03:06 GMT
- Title: Confusing Image Quality Assessment: Towards Better Augmented Reality Experience
- Authors: Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang,
Patrick Le Callet
- Abstract summary: We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
- Score: 96.29124666702566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of multimedia technology, Augmented Reality (AR) has
become a promising next-generation mobile platform. The primary value of AR is
to promote the fusion of digital contents and real-world environments;
however, studies on how this fusion influences the Quality of Experience (QoE)
of these two components are lacking. To achieve better QoE of AR, whose two
layers influence each other, it is important to evaluate its perceptual
quality first. In this paper, we consider AR technology as the superimposition
of virtual scenes and real scenes, and introduce visual confusion as its basic
theory. A more general problem is first proposed, which is evaluating the
perceptual quality of superimposed images, i.e., confusing image quality
assessment. A ConFusing Image Quality Assessment (CFIQA) database is
established, which includes 600 reference images and 300 distorted images
generated by mixing reference images in pairs. Then a subjective quality
perception study and an objective model evaluation experiment are conducted
towards attaining a better understanding of how humans perceive the confusing
images. An objective metric termed CFIQA is also proposed to better evaluate
the confusing image quality. Moreover, an extended ARIQA study is further
conducted based on the CFIQA study. We establish an ARIQA database to better
simulate the real AR application scenarios, which contains 20 AR reference
images, 20 background (BG) reference images, and 560 distorted images generated
from AR and BG references, as well as the correspondingly collected subjective
quality ratings. We also design three types of full-reference (FR) IQA metrics
to study whether we should consider the visual confusion when designing
corresponding IQA algorithms. An ARIQA metric is finally proposed for better
evaluating the perceptual quality of AR images.
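The distorted stimuli in the CFIQA database are generated by mixing reference images in pairs. The abstract does not specify the mixing operator, so the following is a minimal sketch assuming simple per-pixel alpha blending with a mixing coefficient `lam` (the function name and coefficient are illustrative, not from the paper):

```python
import numpy as np

def superimpose(ar_layer: np.ndarray, bg_layer: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Mix two reference images into one 'confusing' stimulus.

    ar_layer, bg_layer: uint8 arrays of identical shape (H, W, 3).
    lam: mixing coefficient in [0, 1]; lam = 1 keeps only the AR layer.
    Note: plain alpha blending is an assumption here -- CFIQA may use a
    different superimposition model.
    """
    if ar_layer.shape != bg_layer.shape:
        raise ValueError("layers must share the same shape")
    mixed = lam * ar_layer.astype(np.float32) + (1.0 - lam) * bg_layer.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)

# Two toy 2x2 "images": one all-white, one all-black.
white = np.full((2, 2, 3), 255, dtype=np.uint8)
black = np.zeros((2, 2, 3), dtype=np.uint8)
print(superimpose(white, black, lam=0.5)[0, 0, 0])  # mid-grey: 127
```

With 600 reference images mixed in pairs, each blend yields one of the 300 distorted images, and each distorted image has two valid references, which is what makes the full-reference assessment "confusing".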
Related papers
- Reference-Free Image Quality Metric for Degradation and Reconstruction Artifacts [2.5282283486446753]
We develop a reference-free quality evaluation network, dubbed "Quality Factor (QF) Predictor".
Our QF Predictor is a lightweight, fully convolutional network comprising seven layers.
It receives a JPEG-compressed image patch with a random QF as input and is trained to accurately predict the corresponding QF.
arXiv Detail & Related papers (2024-05-01T22:28:18Z)
- AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z)
- AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment [62.8834581626703]
We build the most comprehensive subjective quality database AGIQA-3K so far.
We conduct a benchmark experiment on this database to evaluate the consistency between the current Image Quality Assessment (IQA) model and human perception.
We believe that the fine-grained subjective scores in AGIQA-3K will inspire subsequent AGI quality models to fit human subjective perception mechanisms.
arXiv Detail & Related papers (2023-06-07T18:28:21Z)
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z)
- Perceptual Quality Assessment of Omnidirectional Images [81.76416696753947]
We first establish an omnidirectional IQA (OIQA) database, which includes 16 source images and 320 distorted images degraded by 4 commonly encountered distortion types.
Then a subjective quality evaluation study is conducted on the OIQA database in the VR environment.
The original and distorted omnidirectional images, subjective quality ratings, and the head and eye movement data together constitute the OIQA database.
arXiv Detail & Related papers (2022-07-06T13:40:38Z)
- SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-resolution (SR) techniques have greatly improved the visual quality of images by enhancing their resolution.
This also calls for an efficient SR Image Quality Assessment (SR-IQA) method to evaluate those algorithms and their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
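Several of the papers above benchmark objective IQA models by checking their consistency with subjective quality ratings. A common way to do this (an assumption here, since each paper's exact protocol differs) is a Spearman rank-order correlation between metric predictions and mean opinion scores (MOS):

```python
import numpy as np

def srocc(pred, mos):
    """Spearman rank-order correlation between metric predictions and MOS.

    A value near 1 means the metric preserves the subjective ranking;
    ties are ignored in this toy sketch.
    """
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    # Convert raw scores to ranks via double argsort.
    rp = np.argsort(np.argsort(pred)).astype(float)
    rm = np.argsort(np.argsort(mos)).astype(float)
    rp -= rp.mean()
    rm -= rm.mean()
    return float(np.dot(rp, rm) / np.sqrt(np.dot(rp, rp) * np.dot(rm, rm)))

# Toy data: a metric that perfectly preserves the subjective ranking.
print(srocc([0.1, 0.4, 0.9], [1.0, 3.0, 5.0]))  # 1.0
```

In practice, libraries such as `scipy.stats.spearmanr` handle ties properly and would be used instead of this hand-rolled version.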
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed content) and is not responsible for any consequences of its use.