Deep Neural Networks for Blind Image Quality Assessment: Addressing the
Data Challenge
- URL: http://arxiv.org/abs/2109.12161v1
- Date: Fri, 24 Sep 2021 19:48:52 GMT
- Title: Deep Neural Networks for Blind Image Quality Assessment: Addressing the
Data Challenge
- Authors: Shahrukh Athar, Zhongling Wang, Zhou Wang
- Abstract summary: It is difficult to create human-rated IQA datasets composed of millions of images due to constraints of subjective testing.
We construct a DNN-based BIQA model called EONSS, train it on Waterloo Exploration-II, and test it on nine subject-rated IQA datasets.
- Score: 20.97986692607478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The enormous space and diversity of natural images are usually represented by
a few small-scale human-rated image quality assessment (IQA) datasets. This
poses great challenges for deep neural network (DNN) based blind IQA (BIQA),
which requires large-scale training data that is representative of the natural
image distribution. It is extremely difficult to create human-rated IQA
datasets composed of millions of images due to constraints of subjective
testing. While a number of efforts have focused on design innovations to
enhance the performance of DNN-based BIQA, surprisingly little attention has
been paid to the scarcity of labeled IQA data. To address this data challenge,
we construct the largest IQA database to date, namely Waterloo Exploration-II,
which contains 3,570 pristine reference images and around 3.45 million singly and
multiply distorted images. Since subjective testing for such a large dataset is
nearly impossible, we develop a novel mechanism that synthetically assigns
perceptual quality labels to the distorted images. We construct a DNN-based
BIQA model called EONSS, train it on Waterloo Exploration-II, and test it on
nine subject-rated IQA datasets, without any retraining or fine-tuning. The
results show that, with a straightforward DNN architecture, EONSS outperforms
state-of-the-art BIQA methods in both quality prediction accuracy and execution
speed. This study strongly supports the view that the quantity and quality of
meaningfully annotated training data, rather than a sophisticated network
architecture or training strategy, is the dominant factor determining the
performance of DNN-based BIQA models.
(Note: Since this is an ongoing project, the final versions of the Waterloo
Exploration-II database, its quality annotations, and EONSS will be made
publicly available when the project culminates.)
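The evaluation protocol described above, training on synthetically labeled data and then testing on subject-rated datasets without any retraining, ultimately reduces to correlating model predictions with mean opinion scores (MOS). Below is a minimal sketch of that cross-dataset evaluation step using the standard SRCC/KRCC/PLCC measures; the prediction and data-loading functions are hypothetical placeholders, not the authors' EONSS code.

```python
# Minimal sketch of cross-dataset BIQA evaluation: correlate a frozen model's
# predictions with the mean opinion scores (MOS) of a subject-rated dataset.
# `predict_quality`, `load_images`, and `load_mos` are hypothetical placeholders,
# not the authors' EONSS code.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def logistic_4(x, b1, b2, b3, b4):
    """Four-parameter logistic mapping commonly fitted before computing PLCC."""
    return (b1 - b2) / (1.0 + np.exp(-(x - b3) / (abs(b4) + 1e-8))) + b2

def evaluate(pred_scores, mos):
    pred_scores = np.asarray(pred_scores, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srcc = stats.spearmanr(pred_scores, mos)[0]    # rank correlation
    krcc = stats.kendalltau(pred_scores, mos)[0]
    # Fit a monotonic nonlinear mapping before the Pearson correlation,
    # as is standard practice in IQA evaluation.
    p0 = [mos.max(), mos.min(), pred_scores.mean(), pred_scores.std() + 1e-6]
    params, _ = curve_fit(logistic_4, pred_scores, mos, p0=p0, maxfev=10000)
    plcc = stats.pearsonr(logistic_4(pred_scores, *params), mos)[0]
    return {"SRCC": srcc, "KRCC": krcc, "PLCC": plcc}

# Usage (hypothetical): a model trained only on synthetically labeled data,
# evaluated on a subject-rated dataset without any retraining or fine-tuning.
# preds = [predict_quality(img) for img in load_images("CSIQ")]
# print(evaluate(preds, load_mos("CSIQ")))
```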
Related papers
- Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild [66.40314964321557]
We propose a novel IQA method named RichIQA to explore the rich subjective rating information beyond MOS to predict image quality in the wild.
RichIQA is characterized by two key novel designs, the first of which is a three-stage image quality prediction network that exploits the powerful feature representation capability of the Convolutional vision Transformer (CvT) and mimics the short-term and long-term memory mechanisms of the human brain.
RichIQA outperforms state-of-the-art competitors on multiple large-scale in-the-wild IQA databases with rich subjective rating labels.
arXiv Detail & Related papers (2024-09-09T12:00:17Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel diffusion-prior-based IQA method (DP-IQA).
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Cross-IQA: Unsupervised Learning for Image Quality Assessment [3.2287957986061038]
We propose a no-reference image quality assessment (NR-IQA) method termed Cross-IQA, based on the vision transformer (ViT) model.
The proposed Cross-IQA method can learn image quality features from unlabeled image data.
Experimental results show that Cross-IQA achieves state-of-the-art performance in assessing low-frequency degradation information.
arXiv Detail & Related papers (2024-05-07T13:35:51Z)
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to capture semantic information, extracting semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance and demonstrates superior generalization capability in assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose SaTQA, a novel NR-IQA model based on Supervised Contrastive Learning (SCL) and Transformers.
We first train a model on a large-scale synthetic dataset by SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and Transformer long-term dependence modeling capability.
Experimental results on seven standard IQA datasets show that SaTQA outperforms state-of-the-art methods for both synthetic and authentic datasets.
arXiv Detail & Related papers (2023-12-12T06:01:41Z)
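The first stage of SaTQA above relies on supervised contrastive learning over synthetically distorted images. As an illustration only, here is a minimal sketch of a standard supervised contrastive (SupCon-style) objective in which images sharing the same distortion type and level act as positives; the label definition and batching convention are assumptions, not the paper's released code.

```python
# Illustrative SupCon-style supervised contrastive loss for the pre-training
# stage: embeddings of images sharing the same distortion type/level are pulled
# together and all others are pushed apart. Not SaTQA's released implementation;
# the label definition is an assumption.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) distortion type/level ids."""
    z = F.normalize(features, dim=1)                    # unit-norm embeddings
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = sim.masked_fill(self_mask, float("-inf"))  # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)     # avoid -inf * 0 = NaN
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                               # anchors with a positive
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Usage (hypothetical):
# loss = supervised_contrastive_loss(encoder(images), distortion_ids)
```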
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
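The continual-learning formulation above is essentially a protocol: the model is trained on IQA datasets one after another and should remain accurate on those seen earlier. A generic sketch of that loop is given below; the training step, prediction function, and dataset attributes are placeholder assumptions, and no particular anti-forgetting mechanism from the paper is implied.

```python
# Generic sketch of the continual-learning protocol for BIQA: train on a stream
# of IQA datasets one at a time and, after each stage, evaluate on every dataset
# seen so far to quantify forgetting. `train_one_dataset`, `predict`, and the
# dataset attributes (.name, .images, .mos) are placeholder assumptions.
from scipy import stats

def continual_biqa(model, dataset_stream, train_one_dataset, predict):
    history, seen = [], []
    for dataset in dataset_stream:              # e.g. a stream of IQA datasets
        train_one_dataset(model, dataset)       # adapt to the current dataset
        seen.append(dataset)
        # A drop in SRCC on earlier datasets indicates catastrophic forgetting
        # under subpopulation shift.
        history.append({
            d.name: stats.spearmanr(predict(model, d.images), d.mos)[0]
            for d in seen
        })
    return history
```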
- Multi-pooled Inception features for no-reference image quality assessment [0.0]
We propose a new approach for image quality assessment using convolutional neural networks (CNNs).
In contrast to previous methods, we do not take patches from the input image. Instead, the input image is treated as a whole and is run through a pretrained CNN body to extract resolution-independent, multi-level deep features.
We demonstrate that our best proposal, called MultiGAP-NRIQA, is able to provide state-of-the-art results on three benchmark IQA databases.
arXiv Detail & Related papers (2020-11-10T15:09:49Z)
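The whole-image, multi-level feature extraction described above can be illustrated with forward hooks and global average pooling on a pretrained backbone. The sketch below uses a torchvision ResNet-50 purely as a stand-in; the paper itself pools Inception features, so the backbone, layer choice, and downstream regressor are assumptions.

```python
# Sketch of whole-image, multi-level deep feature extraction with global
# average pooling (GAP). A torchvision ResNet-50 stands in for the Inception
# backbone used in the paper; the layer choice and the downstream regressor
# are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
features = []

def gap_hook(_module, _inputs, output):
    # Pool each intermediate feature map to a fixed-length vector, making the
    # representation independent of the input resolution.
    features.append(F.adaptive_avg_pool2d(output, 1).flatten(1))

for layer in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
    layer.register_forward_hook(gap_hook)

@torch.no_grad()
def extract_features(image):                 # image: (1, 3, H, W), any H and W
    features.clear()
    backbone(image)
    return torch.cat(features, dim=1)        # (1, 256 + 512 + 1024 + 2048)

# The pooled multi-level features would then feed a small regressor
# (e.g. an SVR or MLP) trained to predict the quality score.
feat = extract_features(torch.randn(1, 3, 384, 512))
print(feat.shape)                            # torch.Size([1, 3840])
```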
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
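The fidelity loss mentioned above is a pairwise objective: the model's scores for two images are mapped to a probability that one is of higher quality, and that probability is compared with one derived from subjective data. A simplified sketch (unit-variance, Thurstone-style mapping) follows; it is an illustration, not the paper's exact formulation.

```python
# Simplified sketch of a pairwise fidelity loss for BIQA. p_gt is the
# ground-truth probability (derived from subjective data) that image x is of
# higher perceived quality than image y; model scores are mapped to a predicted
# probability with a unit-variance, Thurstone-style Gaussian model.
import torch

def fidelity_loss(score_x, score_y, p_gt, eps=1e-8):
    normal = torch.distributions.Normal(0.0, 1.0)
    p_hat = normal.cdf((score_x - score_y) / (2.0 ** 0.5))
    # Fidelity between the two Bernoulli distributions (p_gt, 1 - p_gt) and
    # (p_hat, 1 - p_hat); the loss vanishes (up to eps) when they match.
    loss = 1.0 - torch.sqrt(p_gt * p_hat + eps) \
               - torch.sqrt((1.0 - p_gt) * (1.0 - p_hat) + eps)
    return loss.mean()

# Usage (hypothetical): scores from a BIQA network f on a batch of image pairs
# loss = fidelity_loss(f(batch_x), f(batch_y), p_gt)
```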
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
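MetaIQA's idea, learning a shared prior across distortion-specific NR-IQA tasks and then adapting it, can be sketched with a first-order meta-learning loop. The Reptile-style update below only illustrates the inner-task adaptation followed by an update of the shared initialization; the paper's actual bi-level optimization differs, and all names here are illustrative.

```python
# First-order (Reptile-style) meta-learning sketch over distortion-specific
# NR-IQA tasks: adapt a copy of the model to one task, then move the shared
# initialization toward the adapted weights. This shows the inner/outer
# structure only; `tasks` and `model` are placeholder assumptions.
import copy
import torch
import torch.nn.functional as F

def meta_train(model, tasks, inner_steps=5, inner_lr=1e-4, meta_lr=1e-2):
    for images, mos in tasks:                    # one task = one distortion type
        task_model = copy.deepcopy(model)        # adapt a copy on this task
        opt = torch.optim.Adam(task_model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            loss = F.mse_loss(task_model(images).squeeze(-1), mos)
            loss.backward()
            opt.step()
        # Outer update: nudge the shared initialization toward the adapted weights.
        with torch.no_grad():
            for p, p_task in zip(model.parameters(), task_model.parameters()):
                p.add_(meta_lr * (p_task - p))
    return model
```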
- Active Fine-Tuning from gMAD Examples Improves Blind Image Quality Assessment [29.196117743753813]
We show that group maximum differentiation (gMAD) examples can be used to improve blind IQA (BIQA) methods.
Specifically, we first pre-train a DNN-based BIQA model using multiple noisy annotators.
We then seek pairs of images by comparing the baseline model with a set of full-reference IQA methods in gMAD.
arXiv Detail & Related papers (2020-03-08T21:19:01Z)
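The gMAD pair search referenced above can be thought of as: among images that a full-reference IQA method rates as (nearly) equal in quality, find the pair on which the blind baseline disagrees the most. A toy sketch of that selection step follows; the tolerance and score conventions are assumptions for illustration.

```python
# Toy sketch of gMAD-style pair selection: among images that a full-reference
# IQA method rates as (nearly) equal in quality, find the pair on which the
# blind baseline model disagrees the most. Such pairs expose the baseline's
# failures and are candidates for labeling and active fine-tuning.
import numpy as np

def gmad_pair(fr_scores, biqa_scores, level_tol=0.5):
    """fr_scores, biqa_scores: 1-D arrays of scores for the same image set."""
    fr = np.asarray(fr_scores, dtype=float)
    bq = np.asarray(biqa_scores, dtype=float)
    best_pair, best_gap = None, -np.inf
    for i in range(len(fr)):
        # Candidates that the full-reference metric considers equal to image i.
        same_level = np.flatnonzero(np.abs(fr - fr[i]) <= level_tol)
        for j in same_level:
            if j == i:
                continue
            gap = abs(bq[i] - bq[j])
            if gap > best_gap:
                best_pair, best_gap = (int(i), int(j)), gap
    # The selected pair would then be judged by humans (or by the full-reference
    # models) and used to fine-tune the blind model.
    return best_pair, best_gap
```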