Content-Diverse Comparisons improve IQA
- URL: http://arxiv.org/abs/2211.05215v1
- Date: Wed, 9 Nov 2022 21:53:13 GMT
- Title: Content-Diverse Comparisons improve IQA
- Authors: William Thong, Jose Costa Pereira, Sarah Parisot, Ales Leonardis,
Steven McDonagh
- Abstract summary: Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains challenging.
Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM.
Because comparisons occur only between images of similar content, this restricts the diversity and number of image pairs that the model is exposed to during training.
In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints, and compare pairs of images with differing content. This increases the variety of available comparisons.
- Score: 23.523537785599913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quality assessment (IQA) forms a natural and often straightforward
undertaking for humans, yet effective automation of the task remains highly
challenging. Recent metrics from the deep learning community commonly compare
image pairs during training to improve upon traditional metrics such as PSNR or
SSIM. However, current comparisons ignore the fact that image content affects
quality assessment as comparisons only occur between images of similar content.
This restricts the diversity and number of image pairs that the model is
exposed to during training. In this paper, we strive to enrich these
comparisons with content diversity. Firstly, we relax comparison constraints,
and compare pairs of images with differing content. This increases the variety
of available comparisons. Secondly, we introduce listwise comparisons to
provide a holistic view to the model. By including differentiable regularizers,
derived from correlation coefficients, models can better adjust predicted
scores relative to one another. Evaluation on multiple benchmarks, covering a
wide range of distortions and image content, shows the effectiveness of our
learning scheme for training image quality assessment models.
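The differentiable, correlation-derived regularizers mentioned above can be made concrete. Below is a minimal PyTorch sketch of a listwise loss term based on the Pearson correlation coefficient (PLCC); the function name, the weighting, and the exact formulation are illustrative assumptions rather than the paper's released code.

```python
import torch
import torch.nn.functional as F

def plcc_regularizer(pred: torch.Tensor, mos: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    """Listwise regularizer: 1 - Pearson correlation between predicted
    scores and mean opinion scores (MOS) over a batch. Differentiable,
    so it nudges predictions to be well ordered relative to one another.
    Illustrative formulation, not the paper's exact loss."""
    p = pred - pred.mean()
    m = mos - mos.mean()
    return 1.0 - (p * m).sum() / (p.norm() * m.norm() + eps)

# Usage: add the regularizer to a base regression loss computed over a
# content-diverse batch (the images need not share content to be compared).
pred = torch.randn(16, requires_grad=True)  # predicted quality scores
mos = torch.rand(16)                        # ground-truth opinion scores
loss = F.mse_loss(pred, mos) + 0.5 * plcc_regularizer(pred, mos)
loss.backward()
```

A Spearman-style (SROCC) regularizer would additionally need a differentiable surrogate for ranking; Pearson correlation is the simplest coefficient to differentiate directly, which is why it is used in this sketch.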
Related papers
- Local Manifold Learning for No-Reference Image Quality Assessment [68.9577503732292]
We propose an innovative framework that integrates local manifold learning with contrastive learning for No-Reference Image Quality Assessment (NR-IQA).
Our approach demonstrates better performance than state-of-the-art methods on 7 standard datasets.
arXiv Detail & Related papers (2024-06-27T15:14:23Z)
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Beyond MOS: Subjective Image Quality Score Preprocessing Method Based on Perceptual Similarity [2.290956583394892]
Methods standardized in ITU-R BT.500, ITU-T P.910, and ITU-T P.913 are used to clean up raw opinion scores.
The proposed PSP method exploits the perceptual similarity between images to alleviate subjective bias in less annotated scenarios.
arXiv Detail & Related papers (2024-04-30T16:01:14Z)
- Pairwise Comparisons Are All You Need [22.798716660911833]
Blind image quality assessment (BIQA) approaches often fall short in real-world scenarios due to their reliance on a generic quality standard applied uniformly across diverse images.
This paper introduces PICNIQ, a pairwise comparison framework designed to bypass the limitations of conventional BIQA.
By employing psychometric scaling algorithms, PICNIQ transforms pairwise comparisons into just-objectionable-difference (JOD) quality scores, offering a granular and interpretable measure of image quality.
arXiv Detail & Related papers (2024-03-13T23:43:36Z)
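To make the psychometric-scaling step above tangible, here is a NumPy sketch that converts a pairwise win-count matrix into relative quality scores via maximum-likelihood Bradley-Terry scaling. PICNIQ's JOD scaling follows a Thurstone-style observer model, so treat this as the general idea rather than the paper's algorithm; the function name and hyperparameters are assumptions.

```python
import numpy as np

def scale_pairwise(wins: np.ndarray, iters: int = 200, lr: float = 0.05) -> np.ndarray:
    """Maximum-likelihood Bradley-Terry scaling: recover latent quality
    scores s from wins[i, j] = number of times image i beat image j.
    Stand-in for psychometric (JOD) scaling."""
    n = wins.shape[0]
    s = np.zeros(n)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(s[None, :] - s[:, None]))  # P(i beats j)
        grad = (wins - (wins + wins.T) * p).sum(axis=1)    # d log-likelihood / d s
        s += lr * grad                                     # gradient ascent
        s -= s.mean()                                      # scores are only relative
    return s

# Ten comparisons per pair among three images of decreasing quality.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
print(scale_pairwise(wins))  # highest score for the most-preferred image
```

Because pairwise comparisons only constrain score differences, the scores are mean-centered at each step; any constant offset would fit the data equally well.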
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
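A rough sketch of the MAP idea above: treat the NR-IQA score as a log-prior over images and a fidelity term as the log-likelihood, then optimize over a generative model's latent. Here `decoder` and `nr_iqa` are hypothetical stand-ins, and plain gradient ascent only approximates the paper's formulation in diffusion latents.

```python
import torch

def map_enhance(z0, decoder, nr_iqa, x_obs, lam=0.1, steps=50, lr=1e-2):
    """Maximize nr_iqa(decoder(z)) - lam * ||decoder(z) - x_obs||^2 over z.
    `decoder` (latent -> image) and `nr_iqa` (image -> scalar score) are
    hypothetical stand-ins; the paper's MAP estimation in diffusion
    latents is more involved than this gradient scheme."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = decoder(z)
        objective = nr_iqa(x).mean() - lam * (x - x_obs).pow(2).mean()
        (-objective).backward()  # minimize the negative MAP objective
        opt.step()
    return decoder(z).detach()
```

Since each NR-IQA model defines a different prior, each yields a different enhanced image, and comparing those images psychophysically is what ranks the models.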
- Image Similarity using An Ensemble of Context-Sensitive Models [2.9490616593440317]
We present a more intuitive approach to building and comparing image similarity models based on labelled data.
We address the challenges of sparse sampling in the image space (R, A, B) and biases in the models trained with context-based data.
Our testing results show that the constructed ensemble model performs 5% better than the best individual context-sensitive model.
arXiv Detail & Related papers (2024-01-15T20:23:05Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and the reference image to measure perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
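The distillation step above can be sketched as follows: a full-reference teacher that sees the (distorted, reference) pair produces pseudo-labels, and a blind student that sees only the distorted image regresses onto them. The tiny stand-in networks below are assumptions for illustration; the actual work uses transformer-based teachers.

```python
import torch
import torch.nn as nn

# Stand-in networks (the real models are far larger and transformer-based).
teacher = nn.Sequential(nn.Conv2d(6, 8, 3, padding=1),   # sees distorted + reference
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),   # blind: distorted only
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

distorted = torch.rand(4, 3, 64, 64)   # unlabeled distorted images
reference = torch.rand(4, 3, 64, 64)   # pristine references (teacher-only input)

with torch.no_grad():                  # full-reference teacher scores the pair
    pseudo_mos = teacher(torch.cat([distorted, reference], dim=1))

pred = student(distorted)              # student never sees the reference
loss = nn.functional.mse_loss(pred, pseudo_mos)  # distill the teacher's scores
loss.backward()
```

The point of the semi-supervised setup is that such image pairs need no human labels: the teacher's scores substitute for subjective annotations.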
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem of discriminating distortion types and levels.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
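As a generic sketch of such a contrastive pairwise objective (not CONTRIQUE's released code), the supervised NT-Xent-style loss below pulls together embeddings of images that share a distortion label and pushes apart all others; it assumes every label in the batch occurs at least twice.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                     tau: float = 0.1) -> torch.Tensor:
    """Supervised NT-Xent-style loss over embeddings z: samples sharing
    a distortion label are positives. Simplified sketch; assumes each
    label appears at least twice in the batch."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau                        # scaled cosine similarity
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = labels[:, None].eq(labels[None, :]) & ~eye
    sim = sim.masked_fill(eye, float('-inf'))      # never contrast with self
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    mean_pos = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos.sum(dim=1)
    return -mean_pos.mean()

z = torch.randn(8, 128)                            # embeddings from the CNN
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])    # distortion type/level ids
loss = contrastive_loss(z, labels)
```

No subjective quality labels appear anywhere in this objective, which is what lets the representation be learned from large unlabeled, synthetically distorted image sets.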
- Rank-smoothed Pairwise Learning In Perceptual Quality Assessment [26.599014990168836]
We show that regularizing pairwise empirical probabilities with aggregated rankwise probabilities leads to a more reliable training loss.
We show that training a deep image quality assessment model with our rank-smoothed loss consistently improves the accuracy of predicting human preferences.
arXiv Detail & Related papers (2020-11-21T23:33:14Z)
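One way to read the rank-smoothing idea above in code: blend each pair's empirical preference probability with a probability implied by aggregated rankings, and use the blend as the target of a pairwise cross-entropy. The blending rule, names, and weight below are illustrative assumptions, not the paper's derivation.

```python
import torch
import torch.nn.functional as F

def rank_smoothed_loss(score_a, score_b, p_emp, p_rank, alpha=0.3):
    """Pairwise cross-entropy whose target mixes the empirical
    probability that A is preferred over B (p_emp) with a probability
    aggregated from global rankings (p_rank)."""
    target = (1.0 - alpha) * p_emp + alpha * p_rank  # smoothed target
    p_model = torch.sigmoid(score_a - score_b)       # model's P(A beats B)
    return F.binary_cross_entropy(p_model, target)

score_a = torch.tensor([0.7], requires_grad=True)    # predicted score, image A
score_b = torch.tensor([0.2], requires_grad=True)    # predicted score, image B
loss = rank_smoothed_loss(score_a, score_b,
                          p_emp=torch.tensor([0.9]),   # 9 of 10 raters chose A
                          p_rank=torch.tensor([0.7]))  # implied by global ranking
loss.backward()
```

The smoothing matters most when a pair has few raters: the empirical probability is noisy, and the rank-aggregated term stabilizes the target.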
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively [135.7695909882746]
We introduce the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
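A minimal sketch of the selection step described above: from an unlabeled corpus, keep only the images on which the competing classifiers' predictions diverge most, and send those for human labeling. The L1 discrepancy used here is one plausible measure, an assumption rather than the competition's exact definition.

```python
import numpy as np

def mad_select(probs_a: np.ndarray, probs_b: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k images where two classifiers' predicted
    class distributions disagree most (L1 distance as a stand-in
    discrepancy measure)."""
    discrepancy = np.abs(probs_a - probs_b).sum(axis=1)
    return np.argsort(-discrepancy)[:k]              # most-disputed first

# probs_a / probs_b: (num_images, num_classes) softmax outputs of two models
probs_a = np.random.dirichlet(np.ones(10), size=1000)
probs_b = np.random.dirichlet(np.ones(10), size=1000)
to_label = mad_select(probs_a, probs_b, k=25)        # images worth annotating
```

Labeling only the selected set keeps the human effort small while still exposing where the classifiers genuinely differ.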
This list is automatically generated from the titles and abstracts of the papers on this site.