Degraded Reference Image Quality Assessment
- URL: http://arxiv.org/abs/2110.14899v1
- Date: Thu, 28 Oct 2021 05:50:59 GMT
- Title: Degraded Reference Image Quality Assessment
- Authors: Shahrukh Athar, Zhou Wang
- Abstract summary: We make one of the first attempts to establish a new paradigm named degraded-reference IQA (DR IQA)
Specifically, we lay out the architectures of DR IQA and introduce a 6-bit code to denote the choices of configurations.
We construct the first large-scale databases dedicated to DR IQA and will make them publicly available.
- Score: 23.871178105179883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In practical media distribution systems, visual content usually undergoes
multiple stages of quality degradation along the delivery chain, but the
pristine source content is rarely available at most quality monitoring points
along the chain to serve as a reference for quality assessment. As a result,
full-reference (FR) and reduced-reference (RR) image quality assessment (IQA)
methods are generally infeasible. Although no-reference (NR) methods are
readily applicable, their performance is often not reliable. On the other hand,
intermediate references of degraded quality are often available, e.g., at the
input of video transcoders, but how to make the best use of them in proper ways
has not been deeply investigated. Here we make one of the first attempts to
establish a new paradigm named degraded-reference IQA (DR IQA). Specifically,
we lay out the architectures of DR IQA and introduce a 6-bit code to denote the
choices of configurations. We construct the first large-scale databases
dedicated to DR IQA and will make them publicly available. We make novel
observations on distortion behavior in multi-stage distortion pipelines by
comprehensively analyzing five multiple distortion combinations. Based on these
observations, we develop novel DR IQA models and make extensive comparisons
with a series of baseline models derived from top-performing FR and NR models.
The results suggest that DR IQA may offer significant performance improvement
in multiple distortion environments, thereby establishing DR IQA as a valid IQA
paradigm that is worth further exploration.
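The abstract mentions a 6-bit code used to denote DR IQA configuration choices. As a minimal sketch of that idea — the specific bit positions and choice names below are assumptions, not taken from the paper — six boolean configuration choices can be packed into and recovered from a single 6-bit integer:

```python
def encode_config(bits):
    """Pack six boolean configuration choices into a 6-bit code.

    `bits` is an ordered list of six booleans. The meaning of each
    position (e.g. which reference or distortion stage it selects)
    is hypothetical here; the paper defines its own assignment.
    """
    assert len(bits) == 6
    code = 0
    for b in bits:
        code = (code << 1) | int(bool(b))  # shift left, append next bit
    return code


def decode_config(code):
    """Recover the six boolean choices from a 6-bit code."""
    return [(code >> i) & 1 == 1 for i in range(5, -1, -1)]
```

A round trip such as `decode_config(encode_config(choices))` returns the original choices, so a single integer in 0..63 can label any of the 64 possible configurations.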
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose DP-IQA, a novel diffusion-priors-based IQA method.
arXiv Detail & Related papers (2024-05-30T12:32:35Z) - Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose SaTQA, a novel NR-IQA model based on Supervised Contrastive Learning (SCL) and Transformers.
We first train a model on a large-scale synthetic dataset by SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and Transformer long-term dependence modeling capability.
Experimental results on seven standard IQA datasets show that SaTQA outperforms the state-of-the-art methods on both synthetic and authentic datasets.
arXiv Detail & Related papers (2023-12-12T06:01:41Z) - You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment [45.62136459502005]
We propose a network to perform full-reference (FR) and no-reference (NR) IQA.
We first employ an encoder to extract multi-level features from input images.
A Hierarchical Attention (HA) module is proposed as a universal adapter for both FR and NR inputs.
A Semantic Distortion Aware (SDA) module is proposed to examine feature correlations between shallow and deep layers of the encoder.
arXiv Detail & Related papers (2023-10-14T11:03:04Z) - SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z) - CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z) - Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z) - A Shift-insensitive Full Reference Image Quality Assessment Model Based on Quadratic Sum of Gradient Magnitude and LOG signals [7.0736273644584715]
We propose an FR-IQA model with the quadratic sum of the GM and the LOG signals, which obtains good performance in image quality estimation.
Experimental results show that the proposed model works robustly on three large scale subjective IQA databases.
arXiv Detail & Related papers (2020-12-21T17:41:07Z) - Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment [20.288424566444224]
We explore normalization in the design of loss functions for image quality assessment (IQA) models.
The resulting "Norm-in-Norm" loss encourages the IQA model to make linear predictions with respect to subjective quality scores.
Experiments on two relevant datasets show that, compared to MAE or MSE loss, the new loss enables the IQA model to converge about 10 times faster.
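The abstract above says the loss normalizes both predictions and subjective scores before comparing them, which makes the loss insensitive to linear rescaling of the predictions. The following is a minimal sketch of that idea, not the paper's exact formulation — the choice of p = 1 and q = 2 and the epsilon term are assumptions:

```python
import numpy as np


def norm_in_norm_loss(pred, mos, p=1, q=2):
    """Sketch of a norm-in-norm style loss.

    Inner step: center each vector and divide by its q-norm, so any
    linear rescaling of `pred` maps to the same normalized vector.
    Outer step: the p-norm (raised to p) of the difference between
    the two normalized vectors.
    """
    def normalize(x):
        x = np.asarray(x, dtype=float)
        x = x - x.mean()                              # remove offset
        return x / (np.linalg.norm(x, ord=q) + 1e-8)  # remove scale

    a, b = normalize(pred), normalize(mos)
    return float(np.sum(np.abs(a - b) ** p))
```

Because of the inner normalization, a prediction vector that is any positive linear function of the subjective scores yields a loss of (approximately) zero, which is what "linear predictions with respect to subjective quality scores" amounts to in this sketch.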
arXiv Detail & Related papers (2020-08-10T04:01:21Z) - MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms the state of the art by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.