MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment
- URL: http://arxiv.org/abs/2004.05508v1
- Date: Sat, 11 Apr 2020 23:36:36 GMT
- Title: MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment
- Authors: Hancheng Zhu, Leida Li, Jinjian Wu, Weisheng Dong, and Guangming Shi
- Abstract summary: This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
- Score: 73.55944459902041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, increasing interest has been drawn in exploiting deep convolutional
neural networks (DCNNs) for no-reference image quality assessment (NR-IQA).
Despite the notable success achieved, there is a broad consensus that
training DCNNs heavily relies on massive annotated data. Unfortunately, IQA is
a typical small sample problem. Therefore, most of the existing DCNN-based IQA
metrics operate based on pre-trained networks. However, these pre-trained
networks are not designed for the IQA task, leading to generalization problems when
evaluating different types of distortions. With this motivation, this paper
presents a no-reference IQA metric based on deep meta-learning. The underlying
idea is to learn the meta-knowledge shared by humans when evaluating the quality
of images with various distortions, which can then be adapted to unknown
distortions easily. Specifically, we first collect a number of NR-IQA tasks for
different distortions. Then meta-learning is adopted to learn the prior
knowledge shared by diversified distortions. Finally, the quality prior model
is fine-tuned on a target NR-IQA task for quickly obtaining the quality model.
Extensive experiments demonstrate that the proposed metric outperforms the
state-of-the-art methods by a large margin. Furthermore, the meta-model learned from
synthetic distortions can also be easily generalized to authentic distortions,
which is highly desired in real-world applications of IQA metrics.
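The two-stage pipeline described above (meta-train a quality prior across distortion-specific NR-IQA tasks, then fine-tune on a target task) follows the pattern of optimization-based meta-learning. Below is a minimal MAML-style sketch of that pattern in PyTorch; `QualityNet` and `sample_task` are illustrative stand-ins, not the authors' code, and the paper's actual backbone is a deep CNN operating on images.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

class QualityNet(nn.Module):
    """Illustrative stand-in regressor; the paper uses a deep CNN on images."""
    def __init__(self, dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.body(x).squeeze(-1)

def adapted_params(model, x_s, y_s, inner_lr=0.01):
    """Inner loop: one gradient step on a task's support set.
    create_graph=True keeps this step differentiable for the meta-update."""
    params = dict(model.named_parameters())
    loss = nn.functional.mse_loss(functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}

def sample_task():
    """Hypothetical loader: support/query batches for one distortion type."""
    x, y = torch.randn(2, 16, 128), torch.rand(2, 16)
    return (x[0], y[0]), (x[1], y[1])

model = QualityNet()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10):                    # meta-training over distortion tasks
    meta_opt.zero_grad()
    for _ in range(4):                    # a mini-batch of NR-IQA tasks
        (x_s, y_s), (x_q, y_q) = sample_task()
        fast = adapted_params(model, x_s, y_s)
        # Query loss through the adapted weights drives the meta-gradient.
        nn.functional.mse_loss(functional_call(model, fast, (x_q,)), y_q).backward()
    meta_opt.step()
# The meta-learned prior would then be fine-tuned on the target NR-IQA task.
```

The point of the sketch is that the meta-gradient flows through the adapted weights, so the shared initialization is optimized for fast adaptation to an unseen distortion type.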
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to sense semantic information and extract semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance, and demonstrates its superior generalization capabilities on assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose SaTQA, a novel Supervised Contrastive Learning (SCL) and Transformer-based NR-IQA model.
We first train a model on a large-scale synthetic dataset via SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information, we propose a backbone network incorporating the Multi-Stream Block (MSB), which combines the inductive bias of CNNs with the long-range dependency modeling of Transformers.
Experimental results on seven standard IQA datasets show that SaTQA outperforms state-of-the-art methods on both synthetic and authentic datasets.
arXiv Detail & Related papers (2023-12-12T06:01:41Z)
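As a rough illustration of the SCL pretraining stage in the SaTQA entry above, the following is a supervised contrastive (SupCon-style) loss over embeddings labeled by distortion type/level; the batch construction and temperature are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: embeddings that share a distortion
    label (type/level) are pulled together; all others are pushed apart."""
    z = F.normalize(z, dim=1)                        # unit-norm embeddings
    sim = z @ z.t() / tau                            # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-likelihood of each anchor's positives, averaged over anchors.
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

# Toy usage: eight embeddings from four hypothetical distortion classes.
z = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supcon_loss(z, labels))
```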
- Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a Transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
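A minimal sketch of the CNN-backbone-plus-Transformer-encoder pattern this entry describes, shown in NR mode: the CNN's spatial feature map is flattened into tokens for the encoder, and a pooled token regresses the score. The module sizes, the ResNet-18 backbone, and mean pooling are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnTransformerIQA(nn.Module):
    """CNN extracts a feature map; its spatial positions become tokens
    for a Transformer encoder; pooled tokens regress a quality score."""
    def __init__(self, d_model=256, nhead=4, layers=2):
        super().__init__()
        cnn = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # [B,512,H,W]
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, img):
        f = self.proj(self.backbone(img))          # [B, d, H, W]
        tokens = f.flatten(2).transpose(1, 2)      # [B, H*W, d]
        tokens = self.encoder(tokens)              # long-range interactions
        return self.head(tokens.mean(dim=1)).squeeze(-1)  # pooled score

score = CnnTransformerIQA()(torch.randn(2, 3, 224, 224))
print(score.shape)  # torch.Size([2])
```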
- Degraded Reference Image Quality Assessment [23.871178105179883]
We make one of the first attempts to establish a new paradigm named degraded-reference IQA (DR IQA).
Specifically, we lay out the architectures of DR IQA and introduce a 6-bit code to denote the choices of configurations.
We construct the first large-scale databases dedicated to DR IQA and will make them publicly available.
arXiv Detail & Related papers (2021-10-28T05:50:59Z)
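The summary does not spell out what the six bits encode, so purely as an illustration of the idea of a 6-bit configuration code, here is a bit-field packing six binary design choices into one integer; all flag names below are hypothetical, not the paper's actual choices.

```python
# Illustrative only: the actual bit meanings in the DR IQA paper may differ.
FLAGS = [
    "use_degraded_reference",   # bit 0 (hypothetical choice names)
    "two_stage_distortion",     # bit 1
    "spatial_alignment",        # bit 2
    "deep_features",            # bit 3
    "multi_scale",              # bit 4
    "learned_fusion",           # bit 5
]

def encode(choices: dict) -> int:
    """Pack six binary configuration choices into one 6-bit integer."""
    return sum(1 << i for i, name in enumerate(FLAGS) if choices.get(name))

def decode(code: int) -> dict:
    """Unpack a 6-bit code back into named configuration choices."""
    return {name: bool(code >> i & 1) for i, name in enumerate(FLAGS)}

cfg = encode({"use_degraded_reference": True, "deep_features": True})
print(f"{cfg:06b}", decode(cfg))  # 001001 {...}
```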
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN), termed CONTRIQUE, using a contrastive pairwise objective to solve an auxiliary prediction task.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment [20.288424566444224]
We explore normalization in the design of loss functions for image quality assessment (IQA) models.
The resulting "Norm-in-Norm'' loss encourages the IQA model to make linear predictions with respect to subjective quality scores.
Experiments on two relevant datasets show that, compared to MAE or MSE loss, the new loss enables the IQA model to converge about 10 times faster.
arXiv Detail & Related papers (2020-08-10T04:01:21Z)
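As a rough sketch of the normalization idea behind the Norm-in-Norm loss: center the predictions and the subjective scores, scale each by its norm, and penalize the norm of their difference, which makes the loss invariant to the linear scale and offset of the raw predictions. The exponents and normalization details here are simplified assumptions, not the paper's exact formulation.

```python
import torch

def norm_in_norm_loss(pred, mos, p=2, q=2):
    """Center each vector, divide by its p-norm, then take the q-norm of
    the difference; raw-prediction scale and offset drop out of the loss."""
    def normalize(v):
        v = v - v.mean()
        return v / v.norm(p=p).clamp(min=1e-8)
    return (normalize(pred) - normalize(mos)).norm(p=q)

pred = torch.randn(16, requires_grad=True)   # model outputs for a batch
mos = torch.rand(16) * 100                   # subjective scores (e.g., MOS)
loss = norm_in_norm_loss(pred, mos)
loss.backward()
```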