Quality-aware Pre-trained Models for Blind Image Quality Assessment
- URL: http://arxiv.org/abs/2303.00521v2
- Date: Thu, 23 Mar 2023 06:57:56 GMT
- Title: Quality-aware Pre-trained Models for Blind Image Quality Assessment
- Authors: Kai Zhao, Kun Yuan, Ming Sun, Mading Li and Xing Wen
- Abstract summary: Blind image quality assessment (BIQA) aims to automatically evaluate the perceived quality of a single image.
In this paper, we propose to solve the problem by a pretext task customized for BIQA in a self-supervised learning manner.
- Score: 15.566552014530938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind image quality assessment (BIQA) aims to automatically evaluate the
perceived quality of a single image, whose performance has been improved by
deep learning-based methods in recent years. However, the paucity of labeled
data somewhat restrains deep learning-based BIQA methods from unleashing their
full potential. In this paper, we propose to solve the problem by a pretext
task customized for BIQA in a self-supervised learning manner, which enables
learning representations from orders of magnitude more data. To constrain the
learning process, we propose a quality-aware contrastive loss based on a simple
assumption: the quality of patches from a distorted image should be similar,
but vary from patches from the same image with different degradations and
patches from different images. Further, we improve the existing degradation
process and form a degradation space with the size of roughly $2\times10^7$.
After pre-training on ImageNet with our method, models are more sensitive to
image quality and perform significantly better on downstream BIQA tasks.
Experimental results show that our method obtains remarkable improvements on
popular BIQA datasets.
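The quality-aware contrastive assumption above (patches from the same distorted image are similar; patches with a different degradation or from a different image are not) can be sketched as an InfoNCE-style loss. The following NumPy sketch is illustrative only: the function name, the cosine-similarity choice, and the temperature value are assumptions, not the authors' exact formulation.

```python
import numpy as np

def quality_aware_contrastive_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style loss: embeddings of patches from the same distorted image
    (positives) should lie close to the anchor, while patches with different
    degradations or from different images (negatives) should lie far away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.array([cos(anchor, p) for p in positives]) / temperature
    neg = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([pos, neg])
    # numerically stable log-sum-exp; loss is the mean -log p(positive)
    logsumexp = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
    return float(np.mean(logsumexp - pos))

rng = np.random.default_rng(0)
base = rng.normal(size=64)                            # embedding of a distorted image
anchor = base + 0.01 * rng.normal(size=64)            # patch from that image
positives = [base + 0.01 * rng.normal(size=64)]       # another patch, same image
negatives = [rng.normal(size=64) for _ in range(8)]   # other images / degradations
loss_good = quality_aware_contrastive_loss(anchor, positives, negatives)
# Swapping a negative into the positive slot should raise the loss sharply
loss_bad = quality_aware_contrastive_loss(anchor, negatives[:1], positives + negatives[1:])
```

When the positive really is a patch from the same distorted image, the loss is near zero; mislabeling a negative as the positive drives it up, which is exactly the gradient signal the pretext task needs.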
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild).
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
- QGFace: Quality-Guided Joint Training For Mixed-Quality Face Recognition [2.8519768339207356]
We propose a novel quality-guided joint training approach for mixed-quality face recognition.
Based on the quality partition, a classification-based method is employed for learning from HQ data.
For the LQ images which lack identity information, we learn them with self-supervised image-image contrastive learning.
arXiv Detail & Related papers (2023-12-29T06:56:22Z)
- Helping Visually Impaired People Take Better Quality Pictures [52.03016269364854]
We develop tools to help visually impaired users minimize occurrences of common technical distortions.
We also create a prototype feedback system that helps to guide users to mitigate quality issues.
arXiv Detail & Related papers (2023-05-14T04:37:53Z)
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
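The idea of distilling a full-reference teacher into a blind student can be illustrated with a toy NumPy sketch: the teacher scores distortion from (distorted, reference) pairs, and the student learns to predict those scores from the distorted image alone. The linear student, the MSE-based teacher metric, and all names here are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy full-reference "teacher": needs both the distorted and the reference image
def teacher_score(distorted, reference):
    return float(np.mean((distorted - reference) ** 2))

# Unlabeled data: clean references plus noisy (distorted) copies at random severities
refs = rng.normal(size=(200, 64))
noise_levels = rng.uniform(0.0, 1.0, size=200)
distorted = refs + noise_levels[:, None] * rng.normal(size=(200, 64))

# Distillation targets come from the teacher, not from human labels
targets = np.array([teacher_score(d, r) for d, r in zip(distorted, refs)])

# Blind "student": linear model over simple no-reference features of the distorted image
feats = np.stack([distorted.var(axis=1),
                  np.abs(np.diff(distorted, axis=1)).mean(axis=1)], axis=1)
X = np.hstack([feats, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X, targets, rcond=None)
pred = X @ w
corr = float(np.corrcoef(pred, targets)[0, 1])
```

Even this crude student correlates with the teacher's full-reference scores while seeing only the distorted image, which is the premise that makes blind distillation viable.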
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, unsolved, and yet challenging problem.
We propose a simple and yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.