Content-Variant Reference Image Quality Assessment via Knowledge
Distillation
- URL: http://arxiv.org/abs/2202.13123v1
- Date: Sat, 26 Feb 2022 12:04:56 GMT
- Title: Content-Variant Reference Image Quality Assessment via Knowledge
Distillation
- Authors: Guanghao Yin, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun,
Changhu Wang
- Abstract summary: We propose a content-variant reference method via knowledge distillation (CVRKD-IQA).
Specifically, we use non-aligned reference (NAR) images to introduce various prior distributions of high-quality images.
Our model outperforms all NAR/NR-IQA SOTAs and even reaches performance comparable to FR-IQA methods on some occasions.
- Score: 35.4412922147879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generally, humans are more skilled at perceiving differences between
high-quality (HQ) and low-quality (LQ) images than at directly judging the
quality of a single LQ image. This situation also applies to image quality
assessment (IQA). Although recent no-reference (NR-IQA) methods have made great
progress in predicting image quality without a reference image, they still
leave room for improvement because HQ image information is not fully exploited.
In contrast, full-reference (FR-IQA) methods tend to provide more reliable
quality evaluation, but their practicality is limited by the requirement for
pixel-level aligned reference images. To address this, we are the first to
propose a content-variant reference method via knowledge distillation
(CVRKD-IQA). Specifically, we use non-aligned reference (NAR) images to
introduce various prior distributions of high-quality images. Comparing the
distribution differences between HQ and LQ images helps our model better
assess image quality. Further, knowledge distillation transfers more HQ-LQ
distribution difference information from the FR-teacher to the NAR-student and
stabilizes CVRKD-IQA performance. Moreover, to fully mine combined
local-global information while achieving faster inference, our model directly
processes multiple image patches from the input with an MLP-mixer.
Cross-dataset experiments verify that our model outperforms all NAR/NR-IQA
SOTAs and even reaches performance comparable to FR-IQA methods on some
occasions. Since content-variant and non-aligned reference HQ images are easy
to obtain, our model can support more IQA applications thanks to its relative
robustness to content variations. Our code and detailed supplementary
materials are available at: https://github.com/guanghaoyin/CVRKD-IQA.
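To make the training recipe concrete, below is a minimal PyTorch sketch of the FR-teacher/NAR-student distillation with an MLP-mixer-style patch branch, as the abstract describes. It is an illustration under stated assumptions, not the authors' released code: the names PairEncoder and distill_step, the score- and feature-level distillation terms, and the loss weights alpha/beta are all hypothetical; see the repository above for the actual implementation.

```python
# Hedged sketch of CVRKD-IQA-style training (illustrative, not the released code).
# Assumed pieces: an FR teacher fed the pixel-aligned HQ reference, and an NAR
# student fed an arbitrary content-variant HQ image; both compare HQ vs. LQ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairEncoder(nn.Module):
    """Maps a (reference, distorted) image pair to a quality score.

    A stand-in for the paper's MLP-mixer branch: local patches from both
    images are embedded, and the embedding *difference* (the HQ-LQ
    distribution gap) is mixed across tokens and channels.
    """

    def __init__(self, patch=32, dim=256, n_patches=16):
        super().__init__()
        self.patch, self.n_patches = patch, n_patches
        self.embed = nn.Linear(3 * patch * patch, dim)
        self.token_mix = nn.Sequential(
            nn.Linear(n_patches, n_patches), nn.GELU(),
            nn.Linear(n_patches, n_patches))
        self.chan_mix = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.head = nn.Linear(dim, 1)

    def _patches(self, img):
        # Split into non-overlapping patches, keep the first n_patches.
        p = self.patch
        x = img.unfold(2, p, p).unfold(3, p, p)        # (B, C, h, w, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).flatten(1, 2)  # (B, N, C, p, p)
        return x.flatten(2)[:, : self.n_patches]       # (B, N, C*p*p)

    def forward(self, ref, dist):
        f = self.embed(self._patches(ref)) - self.embed(self._patches(dist))
        f = f + self.token_mix(f.transpose(1, 2)).transpose(1, 2)  # token mixing
        f = f + self.chan_mix(f)                                   # channel mixing
        feat = f.mean(dim=1)                                       # global pooling
        return self.head(feat).squeeze(-1), feat


def distill_step(teacher, student, lq, hq_aligned, hq_nar, mos,
                 alpha=1.0, beta=0.5):
    """One training step: FR-teacher -> NAR-student knowledge distillation.

    alpha/beta are illustrative loss weights, not values from the paper.
    """
    with torch.no_grad():                             # teacher is frozen
        t_score, t_feat = teacher(hq_aligned, lq)
    s_score, s_feat = student(hq_nar, lq)
    return (F.l1_loss(s_score, mos)                   # supervised MOS regression
            + alpha * F.l1_loss(s_score, t_score)     # score-level distillation
            + beta * F.mse_loss(s_feat, t_feat))      # feature-level distillation


# Usage on dummy data (128x128 inputs yield exactly 16 patches of size 32):
teacher, student = PairEncoder(), PairEncoder()
lq = torch.rand(4, 3, 128, 128)              # distorted inputs
hq_aligned = torch.rand(4, 3, 128, 128)      # pixel-aligned references (teacher)
hq_nar = torch.rand(4, 3, 128, 128)          # arbitrary HQ references (student)
mos = torch.rand(4)                          # ground-truth quality scores
loss = distill_step(teacher, student, lq, hq_aligned, hq_nar, mos)
loss.backward()
```

The key design point the sketch captures is that the student never needs a pixel-aligned reference at inference time: it only learns, via the distillation terms, to reproduce the teacher's HQ-LQ difference judgments from a content-variant HQ image.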
Related papers
- GenzIQA: Generalized Image Quality Assessment using Prompt-Guided Latent Diffusion Models [7.291687946822539]
A major drawback of state-of-the-art NR-IQA methods is their limited ability to generalize across diverse IQA settings.
Recent text-to-image generative models generate meaningful visual concepts with fine details related to text concepts.
In this work, we leverage the denoising process of such diffusion models for generalized IQA by understanding the degree of alignment between learnable quality-aware text prompts and images.
arXiv Detail & Related papers (2024-06-07T05:46:39Z)
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that the Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild)
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Cross-IQA: Unsupervised Learning for Image Quality Assessment [3.2287957986061038]
We propose a no-reference image quality assessment (NR-IQA) method termed Cross-IQA based on the vision transformer (ViT) model.
The proposed Cross-IQA method can learn image quality features from unlabeled image data.
Experimental results show that Cross-IQA can achieve state-of-the-art performance in assessing the low-frequency degradation information.
arXiv Detail & Related papers (2024-05-07T13:35:51Z)
- Reference-Free Image Quality Metric for Degradation and Reconstruction Artifacts [2.5282283486446753]
We develop a reference-free quality evaluation network, dubbed the "Quality Factor (QF) Predictor".
Our QF Predictor is a lightweight, fully convolutional network comprising seven layers.
It receives a JPEG-compressed image patch with a random QF as input and is trained to accurately predict the corresponding QF.
arXiv Detail & Related papers (2024-05-01T22:28:18Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment [58.09173822651016]
We argue that it is possible to learn reference knowledge under the No-Reference Image Quality Assessment setting.
We propose a new framework to learn comparative knowledge from non-aligned reference images.
Experiments on eight standard NR-IQA datasets demonstrate superior performance to state-of-the-art NR-IQA methods.
arXiv Detail & Related papers (2023-12-01T13:56:01Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, i.e., degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)