Can No-reference features help in Full-reference image quality estimation?
- URL: http://arxiv.org/abs/2203.00845v1
- Date: Wed, 2 Mar 2022 03:39:28 GMT
- Title: Can No-reference features help in Full-reference image quality estimation?
- Authors: Saikat Dutta, Sourya Dipta Das, Nisarg A. Shah
- Abstract summary: We study the utilization of no-reference features in the Full-reference IQA task.
Our model achieves higher SRCC and KRCC scores than a number of state-of-the-art algorithms.
- Score: 20.491565297561912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of perceptual image quality assessment (IQA)
metrics has been of significant interest to the computer vision community.
The aim of these metrics is to model the quality of an image as perceived by
humans. Recent works in Full-reference IQA research perform a pixelwise
comparison between deep features corresponding to the query and reference
images for quality prediction. However, pixelwise feature comparison may not
be meaningful if the distortion present in the query image is severe. In this
context, we explore the utilization of no-reference features in the
Full-reference IQA task. Our model consists of both full-reference and
no-reference branches. The full-reference branches use both the distorted and
reference images, whereas the no-reference branch uses only the distorted
image. Our experiments show that the use of no-reference features boosts the
performance of image quality assessment. Our model achieves higher SRCC and
KRCC scores than a number of state-of-the-art algorithms on the KADID-10K and
PIPAL datasets.
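The paper's own implementation is not reproduced on this page. Below is a minimal PyTorch sketch of the two-branch idea described in the abstract, assuming a shared CNN encoder, difference-based full-reference features, and fusion by concatenation; all module sizes and the fusion choice are illustrative assumptions, not the authors' exact architecture. SRCC and KRCC (Spearman's and Kendall's rank correlation coefficients) are computed with scipy.

    import torch
    import torch.nn as nn
    from scipy.stats import kendalltau, spearmanr

    class TwoBranchIQA(nn.Module):
        """Illustrative two-branch (full-reference + no-reference) IQA model."""

        def __init__(self, feat_dim=128):
            super().__init__()
            # Shared feature extractor; a stand-in for a pretrained backbone.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            self.fr_head = nn.Linear(feat_dim, feat_dim)  # full-reference branch
            self.nr_head = nn.Linear(feat_dim, feat_dim)  # no-reference branch
            self.regressor = nn.Linear(2 * feat_dim, 1)   # fused quality score

        def forward(self, distorted, reference):
            f_dist = self.encoder(distorted)
            f_ref = self.encoder(reference)
            fr_feat = self.fr_head(f_ref - f_dist)  # compares both images
            nr_feat = self.nr_head(f_dist)          # sees the distorted image only
            return self.regressor(torch.cat([fr_feat, nr_feat], dim=1))

    def rank_metrics(predicted, mos):
        """SRCC and KRCC between predicted scores and subjective (MOS) scores."""
        srcc = spearmanr(predicted, mos).correlation
        krcc = kendalltau(predicted, mos).correlation
        return srcc, krcc

Note that even when the distortion is severe enough to make pixelwise feature differences uninformative, the no-reference branch still produces a usable signal, which is the motivation the abstract gives for the design.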
Related papers
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Reference-Free Image Quality Metric for Degradation and Reconstruction Artifacts [2.5282283486446753]
We develop a reference-free quality evaluation network, dubbed "Quality Factor (QF) Predictor"
Our QF Predictor is a lightweight, fully convolutional network comprising seven layers.
It receives a JPEG-compressed image patch with a random QF as input and is trained to accurately predict the corresponding QF.
arXiv Detail & Related papers (2024-05-01T22:28:18Z)
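The entry above pins down the QF Predictor's shape (seven layers, fully convolutional) but not the layer widths or training loss; the sketch below fills those in with assumed values, so treat it as a shape-level illustration only.

    import torch.nn as nn

    class QFPredictor(nn.Module):
        """Seven-layer fully convolutional JPEG quality-factor regressor.
        Layer widths are assumed; only the depth comes from the summary."""

        def __init__(self):
            super().__init__()
            widths = [3, 16, 32, 32, 64, 64, 128]
            layers = []
            for c_in, c_out in zip(widths[:-1], widths[1:]):  # six conv layers
                layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU()]
            # Seventh conv layer maps to a single QF estimate per location;
            # global averaging keeps the network fully convolutional.
            layers += [nn.Conv2d(128, 1, 1), nn.AdaptiveAvgPool2d(1)]
            self.net = nn.Sequential(*layers)

        def forward(self, jpeg_patch):               # (B, 3, H, W) compressed patch
            return self.net(jpeg_patch).flatten(1)   # (B, 1) predicted QF

Training would presumably regress the output against the QF used to compress each patch, e.g. with an MSE loss.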
- Pairwise Comparisons Are All You Need [22.798716660911833]
Blind image quality assessment (BIQA) approaches often fall short in real-world scenarios due to their reliance on a generic quality standard applied uniformly across diverse images.
This paper introduces PICNIQ, a pairwise comparison framework designed to bypass the limitations of conventional BIQA.
By employing psychometric scaling algorithms, PICNIQ transforms pairwise comparisons into just-objectionable-difference (JOD) quality scores, offering a granular and interpretable measure of image quality.
arXiv Detail & Related papers (2024-03-13T23:43:36Z)
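This summary does not specify which psychometric scaling algorithm PICNIQ employs. Bradley-Terry maximum-likelihood scaling is one standard way to turn pairwise preference counts into relative quality scores, sketched here purely as an illustration of the technique class, not as PICNIQ's method.

    import numpy as np
    from scipy.optimize import minimize

    def scale_pairwise(wins):
        """Bradley-Terry scaling. wins[i, j] = times image i beat image j."""
        n = wins.shape[0]

        def neg_log_likelihood(s):
            diff = s[:, None] - s[None, :]      # s_i - s_j for every pair
            log_p = -np.logaddexp(0.0, -diff)   # log sigmoid(s_i - s_j)
            return -(wins * log_p).sum()

        result = minimize(neg_log_likelihood, np.zeros(n), method="L-BFGS-B")
        return result.x - result.x.mean()       # remove the arbitrary offset

For example, scale_pairwise(np.array([[0, 8], [2, 0]])) assigns image 0 a higher score than image 1, since it was preferred 8 times out of 10 comparisons.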
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z)
- QMRNet: Quality Metric Regression for EO Image Quality Assessment and Super-Resolution [2.425299069769717]
We benchmark state-of-the-art Super-Resolution (SR) algorithms for distinct Earth Observation (EO) datasets.
We also propose a novel Quality Metric Regression Network (QMRNet) that is able to predict quality (as a No-Reference metric) by training on any property of the image.
The overall benchmark shows promising results for LIIF, CAR, and MSRN, as well as the potential use of QMRNet as a loss for optimizing SR predictions.
arXiv Detail & Related papers (2022-10-12T22:51:13Z)
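The entry mentions using QMRNet as a loss for optimizing SR predictions. The generic pattern, a frozen no-reference quality predictor added as an auxiliary training term, can be sketched as follows; quality_net, the L1 pixel term, and the weight alpha are assumptions, not details from the paper.

    import torch.nn.functional as F

    def sr_loss(sr_output, hr_target, quality_net, alpha=0.1):
        """Pixel loss plus a frozen NR quality predictor as an auxiliary term.
        quality_net's parameters should be frozen (requires_grad_(False));
        gradients still flow back into sr_output. alpha is an assumed weight."""
        pixel_loss = F.l1_loss(sr_output, hr_target)
        quality_loss = -quality_net(sr_output).mean()  # assumes higher = better
        return pixel_loss + alpha * quality_loss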
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference images to measure the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
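A generic sketch of the distillation idea above: a full-reference teacher that sees (distorted, reference) pairs supervises a blind student that sees only the distorted image. The MSE objective and training-loop details are assumptions, not the paper's exact recipe.

    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher, distorted, reference, optimizer):
        """One step of FR-teacher -> blind-student distillation (details assumed)."""
        with torch.no_grad():
            target = teacher(distorted, reference)  # teacher needs both images
        pred = student(distorted)                   # student sees one image only
        loss = F.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the teacher's targets require no human labels, unlabeled distorted/reference pairs can supply the semi-supervised signal the summary refers to.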
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
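CONTRIQUE's exact objective and positive-pair definition are not given in this summary; the sketch below shows a generic contrastive pairwise objective (NT-Xent style) over two embedded views of the same batch, as an illustration of the general approach rather than the paper's loss.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z1, z2, temperature=0.1):
        """NT-Xent-style pairwise objective; z1, z2 are (B, D) embeddings of
        two views of the same batch. Matching rows are positives."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature                   # (B, B) similarities
        labels = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
        return F.cross_entropy(logits, labels)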
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, the degraded images, as references.
Our results can even approach the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.