Parameterized Image Quality Score Distribution Prediction
- URL: http://arxiv.org/abs/2203.00926v1
- Date: Wed, 2 Mar 2022 08:13:33 GMT
- Title: Parameterized Image Quality Score Distribution Prediction
- Authors: Yixuan Gao, Xiongkuo Min, Wenhan Zhu, Xiao-Ping Zhang and Guangtao Zhai
- Abstract summary: We describe image quality using a parameterized distribution rather than a mean opinion score (MOS).
An objective method is also proposed to predict the image quality score distribution (IQSD).
- Score: 40.397816495489295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, image quality has been generally described by a mean
opinion score (MOS). However, we observe that the quality scores of an image
given by a group of subjects are very subjective and diverse. Thus it is not
enough to use a MOS to describe the image quality. In this paper, we propose
to describe image quality using a parameterized distribution rather than a
MOS, and an objective method is also proposed to predict the image quality
score distribution (IQSD). At first, the LIVE database is re-recorded.
Specifically, we have invited a large group of subjects to evaluate the
quality of all images in the LIVE database, and each image is evaluated by a
large number of subjects (187 valid subjects), whose scores can form a
reliable IQSD. By analyzing the obtained subjective quality scores, we find
that the IQSD can be well modeled by an alpha stable model, and it can
reflect much more information than a single MOS, such as the skewness of
opinion scores, the subject diversity and the maximum probability score for
an image. Therefore, we propose to model the IQSD using the alpha stable
model. Moreover, we propose a framework and an algorithm to predict the
alpha stable model based IQSD, where quality features are extracted from
each image based on structural information and statistical information, and
support vector regressors are trained to predict the alpha stable model
parameters. Experimental results verify the feasibility of using the alpha
stable model to describe the IQSD, and prove the effectiveness of the
objective alpha stable model based IQSD prediction method.
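The pipeline described above (fit an alpha stable model to each image's opinion scores, extract per-image quality features, and regress the distribution parameters with support vector regressors) can be illustrated with a minimal sketch. The sketch assumes SciPy's levy_stable for alpha stable fitting and scikit-learn's SVR; the feature extractor is a placeholder, not the paper's actual structural and statistical features.

```python
# Minimal sketch: fit an alpha stable model to per-image opinion scores and
# train one support vector regressor per distribution parameter.
# Assumes scipy (scipy.stats.levy_stable) and scikit-learn; the feature
# extractor below is a stand-in, not the paper's feature set.
import numpy as np
from scipy.stats import levy_stable
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def fit_iqsd(scores):
    """Fit a 4-parameter alpha stable distribution (alpha, beta, loc, scale)
    to the opinion scores of one image. MLE fitting can be slow; the paper's
    fitting procedure may differ."""
    alpha, beta, loc, scale = levy_stable.fit(scores)
    return np.array([alpha, beta, loc, scale])


def extract_features(image):
    """Placeholder for per-image quality features (structural + statistical)."""
    image = np.asarray(image, dtype=np.float64)
    return np.array([image.mean(), image.std(),
                     np.abs(np.gradient(image)[0]).mean()])


def train_iqsd_predictor(images, per_image_scores):
    """Train SVRs mapping image features to alpha stable parameters."""
    X = np.stack([extract_features(img) for img in images])
    y = np.stack([fit_iqsd(s) for s in per_image_scores])
    model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf")))
    model.fit(X, y)
    return model


def predict_iqsd(model, image, grid=np.linspace(0, 100, 201)):
    """Predict the parameters and evaluate the resulting score distribution."""
    alpha, beta, loc, scale = model.predict(extract_features(image)[None, :])[0]
    return levy_stable.pdf(grid, alpha, beta, loc=loc, scale=scale)
```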
Related papers
- Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild [66.40314964321557]
We propose a novel IQA method named RichIQA to explore the rich subjective rating information beyond MOS to predict image quality in the wild.
RichIQA is characterized by two key novel designs: (1) a three-stage image quality prediction network which exploits the powerful feature representation capability of the Convolutional vision Transformer (CvT) and mimics the short-term and long-term memory mechanisms of the human brain.
RichIQA outperforms state-of-the-art competitors on multiple large-scale in-the-wild IQA databases with rich subjective rating labels.
arXiv Detail & Related papers (2024-09-09T12:00:17Z)
- Sliced Maximal Information Coefficient: A Training-Free Approach for Image Quality Assessment Enhancement [12.628718661568048]
We aim to explore a generalized human visual attention estimation strategy to mimic the process of human quality rating.
In particular, we model human attention generation by measuring the statistical dependency between the degraded image and the reference image.
Experimental results verify that the performance of existing IQA models can be consistently improved when our attention module is incorporated.
arXiv Detail & Related papers (2024-08-19T11:55:32Z)
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Cross-IQA: Unsupervised Learning for Image Quality Assessment [3.2287957986061038]
We propose a no-reference image quality assessment (NR-IQA) method termed Cross-IQA based on the vision transformer (ViT) model.
The proposed Cross-IQA method can learn image quality features from unlabeled image data.
Experimental results show that Cross-IQA can achieve state-of-the-art performance in assessing the low-frequency degradation information.
arXiv Detail & Related papers (2024-05-07T13:35:51Z)
- An Image Quality Assessment Dataset for Portraits [0.9786690381850354]
This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 scenarios acquired by 100 smartphones.
The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research.
An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23.
arXiv Detail & Related papers (2023-04-12T11:30:06Z)
- Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z)
- Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model [42.05084438912876]
We introduce the largest annotated IQA database developed to date, which contains 20,000 human faces.
We propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA.
arXiv Detail & Related papers (2022-07-11T14:28:18Z)
- Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network [20.835800149919145]
Image quality assessment (IQA) algorithms aim to quantify the human perception of image quality.
There is a performance drop when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic texture.
We propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to deal with the challenge and get better performance on the GAN-based IQA task.
arXiv Detail & Related papers (2022-04-22T03:59:18Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach of training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
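The last entry above trains a BIQA network on image pairs with the fidelity loss, i.e. L = 1 - sqrt(p * p_hat) - sqrt((1 - p) * (1 - p_hat)) between a ground-truth preference probability p and a predicted probability p_hat. The minimal sketch below assumes the predicted probability comes from a Gaussian CDF of the score difference; the function and variable names are illustrative, not taken from the cited paper.

```python
# Minimal sketch of a fidelity loss for pairwise BIQA training
# (illustrative; the Gaussian-CDF link and all names are assumptions).
import numpy as np
from scipy.stats import norm


def pairwise_preference(score_a, score_b, sigma_a=1.0, sigma_b=1.0):
    """Probability that image A is perceived as better than image B,
    modeled with a Gaussian CDF of the quality-score difference."""
    return norm.cdf((score_a - score_b) / np.sqrt(sigma_a**2 + sigma_b**2))


def fidelity_loss(p_true, p_pred, eps=1e-8):
    """Fidelity loss: 1 - sqrt(p * p_hat) - sqrt((1 - p) * (1 - p_hat))."""
    p_pred = np.clip(p_pred, eps, 1.0 - eps)
    return 1.0 - np.sqrt(p_true * p_pred) - np.sqrt((1.0 - p_true) * (1.0 - p_pred))


# Example: predicted scores 3.2 and 2.7 for a pair whose ground-truth
# preference probability (derived from subjective ratings) is 0.8.
loss = fidelity_loss(0.8, pairwise_preference(3.2, 2.7))
```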