CUID: A new study of perceived image quality and its subjective assessment
- URL: http://arxiv.org/abs/2009.13304v1
- Date: Mon, 28 Sep 2020 13:14:45 GMT
- Title: CUID: A new study of perceived image quality and its subjective assessment
- Authors: Lucie Lévêque (UNIV GUSTAVE EIFFEL), Ji Yang, Xiaohan Yang,
Pengfei Guo, Kenneth Dasalla, Leida Li, Yingying Wu, Hantao Liu
- Abstract summary: We present a new study of image quality perception where subjective ratings were collected in a controlled lab environment.
We investigate how quality perception is affected by a combination of different categories of images and different types and levels of distortions.
- Score: 30.698984450985318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on image quality assessment (IQA) remains limited mainly due to our
incomplete knowledge of human visual perception. Existing IQA algorithms have
been designed or trained on limited subjective data with little stimulus
variability, which makes it difficult for them to handle the complexity and
diversity of real-world digital content. Perceptual
evidence from human subjects serves as a grounding for the development of
advanced IQA algorithms. It is thus critical to acquire reliable subjective
data with controlled perception experiments that faithfully reflect human
behavioural responses to distortions in visual signals. In this paper, we
present a new study of image quality perception where subjective ratings were
collected in a controlled lab environment. We investigate how quality
perception is affected by a combination of different categories of images and
different types and levels of distortions. The database will be made publicly
available to facilitate calibration and validation of IQA algorithms.
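Subjective databases like this are typically calibrated by aggregating raw ratings into mean opinion scores (MOS). A minimal sketch of that aggregation, using per-subject z-score normalization to compensate for different uses of the rating scale (a common step in the IQA literature, not necessarily the exact procedure of this study):

```python
import numpy as np

def mean_opinion_scores(ratings: np.ndarray) -> np.ndarray:
    """Aggregate raw subjective ratings into per-image scores.

    ratings: array of shape (n_subjects, n_images), e.g. on a 1-5 scale.
    Returns a MOS-like score per image after per-subject z-score
    normalization (compensates for subjects using the scale differently).
    """
    # Normalize each subject's ratings to zero mean, unit variance.
    mu = ratings.mean(axis=1, keepdims=True)
    sigma = ratings.std(axis=1, keepdims=True)
    z = (ratings - mu) / np.where(sigma > 0, sigma, 1.0)
    # Average the normalized ratings over subjects for each image.
    return z.mean(axis=0)

# Example: 3 subjects rating 4 images of decreasing quality.
raw = np.array([[5, 4, 2, 1],
                [4, 4, 3, 1],
                [5, 3, 2, 2]], dtype=float)
mos = mean_opinion_scores(raw)
```

Full subjective protocols (e.g. those following ITU-R BT.500) additionally screen out inconsistent observers before averaging; that step is omitted here for brevity.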
Related papers
- Helping Visually Impaired People Take Better Quality Pictures [52.03016269364854]
We develop tools to help visually impaired users minimize occurrences of common technical distortions.
We also create a prototype feedback system that helps to guide users to mitigate quality issues.
arXiv Detail & Related papers (2023-05-14T04:37:53Z)
- Subjective and Objective Quality Assessment for in-the-Wild Computer Graphics Images [57.02760260360728]
We build a large-scale in-the-wild CGIQA database consisting of 6,000 CGIs (CGIQA-6k).
We propose an effective deep learning-based no-reference (NR) IQA model by utilizing both distortion and aesthetic quality representation.
Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database.
arXiv Detail & Related papers (2023-03-14T16:32:24Z)
- Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z)
- Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
arXiv Detail & Related papers (2022-07-25T17:58:16Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- Deep Superpixel-based Network for Blind Image Quality Assessment [4.079861933099766]
The goal of a blind image quality assessment (BIQA) model is to simulate how human eyes evaluate images.
We propose a deep adaptive superpixel-based network, DSN-IQA, to assess image quality based on multi-scale and superpixel segmentation.
arXiv Detail & Related papers (2021-10-13T08:26:58Z)
- Estimating MRI Image Quality via Image Reconstruction Uncertainty [4.483523280360846]
We train CNNs using a heteroscedastic uncertainty model to recover clean images from noisy data.
We argue that quality control for visual assessment cannot be equated to quality control for algorithmic processing.
arXiv Detail & Related papers (2021-06-21T11:22:17Z)
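The heteroscedastic uncertainty model mentioned in the last entry is commonly trained with a Gaussian negative log-likelihood in which the network predicts both a reconstruction and a per-pixel variance. A minimal sketch of that loss (an illustration of the general technique, not the authors' exact implementation):

```python
import numpy as np

def heteroscedastic_nll(pred_mean, pred_log_var, target):
    """Gaussian negative log-likelihood with predicted per-pixel variance.

    pred_mean:    network output, the reconstructed image.
    pred_log_var: network output, log-variance s = log(sigma^2) per pixel
                  (predicting log-variance keeps the loss numerically stable).
    target:       clean reference image.

    Pixels marked as uncertain (large s) are down-weighted in the residual
    term but penalized by the +s/2 term, so the model cannot simply declare
    everything uncertain.
    """
    residual_sq = (target - pred_mean) ** 2
    return np.mean(0.5 * np.exp(-pred_log_var) * residual_sq
                   + 0.5 * pred_log_var)

# Example: with small residuals, a confident prediction (low variance)
# scores a lower loss than an unnecessarily uncertain one.
target = np.zeros((4, 4))
mean = np.full((4, 4), 0.1)
loss_confident = heteroscedastic_nll(mean, np.full((4, 4), -2.0), target)
loss_uncertain = heteroscedastic_nll(mean, np.full((4, 4), 2.0), target)
```

The predicted variance map can then double as a quality or confidence estimate for the reconstruction, which is the connection to image quality the paper draws on.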
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.