Data-Efficient Image Quality Assessment with Attention-Panel Decoder
- URL: http://arxiv.org/abs/2304.04952v1
- Date: Tue, 11 Apr 2023 03:52:17 GMT
- Title: Data-Efficient Image Quality Assessment with Attention-Panel Decoder
- Authors: Guanyi Qin, Runze Hu, Yutao Liu, Xiawu Zheng, Haotian Liu, Xiu Li, Yan Zhang
- Abstract summary: Blind Image Quality Assessment (BIQA) is a fundamental task in computer vision that remains unresolved due to complex distortion conditions and diverse image contents.
We propose a novel BIQA pipeline based on the Transformer architecture, which achieves an efficient quality-aware feature representation with far less data.
- Score: 19.987556370430806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind Image Quality Assessment (BIQA) is a fundamental task in computer
vision that remains unresolved due to complex distortion conditions and
diverse image contents. To confront this challenge, in this paper we propose
a novel BIQA pipeline based on the Transformer architecture, which achieves
an efficient quality-aware feature representation with far less data. More
specifically, we treat the traditional fine-tuning in BIQA as an
interpretation of the pre-trained model. On this basis, we further introduce
a Transformer decoder to refine the perceptual information of the CLS token
from different perspectives. This enables our model to establish the
quality-aware feature manifold efficiently while attaining strong
generalization capability. Meanwhile, inspired by the subjective evaluation
behaviors of humans, we introduce a novel attention-panel mechanism, which
improves model performance and reduces prediction uncertainty
simultaneously. The proposed BIQA method maintains a lightweight design with
only one decoder layer, yet extensive experiments on eight standard BIQA
datasets (both synthetic and authentic) demonstrate performance superior to
state-of-the-art BIQA methods, i.e., achieving SRCC values of 0.875
(vs. 0.859) on LIVEC and 0.980 (vs. 0.969) on LIVE.
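The SRCC figures reported above measure rank agreement between predicted quality scores and human mean opinion scores (MOS). A minimal pure-Python sketch of the metric, assuming no tied scores so the closed-form formula applies (the score lists below are hypothetical, not from the paper):

```python
def srcc(xs, ys):
    """Spearman rank correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the per-item difference in ranks. Valid when there are no ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1  # ranks start at 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical predicted scores vs. mean opinion scores (MOS)
pred = [3.1, 4.5, 2.0, 5.2, 1.1]
mos = [3.0, 5.1, 2.5, 4.9, 1.0]
print(srcc(pred, mos))  # one swapped rank pair out of five -> 0.9
```

A perfect monotonic agreement yields 1.0, which is why SRCC close to 0.98 on LIVE indicates near-perfect ranking of image quality.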
Related papers
- Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization [55.09893295671917]
This paper introduces a novel Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA).
The GRMP-IQA comprises two key modules: a Meta-Prompt Pre-training Module and Quality-Aware Gradient Regularization.
Experiments on five standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods under the limited-data setting.
arXiv Detail & Related papers (2024-09-09T07:26:21Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - A Lightweight Parallel Framework for Blind Image Quality Assessment [7.9562077122537875]
We propose a lightweight parallel framework (LPF) for blind image quality assessment (BIQA).
First, we extract visual features using a pre-trained feature extraction network. Then, we construct a simple yet effective feature embedding network (FEN) to transform the visual features.
We present two novel self-supervised subtasks: a sample-level category prediction task and a batch-level quality comparison task.
arXiv Detail & Related papers (2024-02-19T10:56:58Z) - Feature Denoising Diffusion Model for Blind Image Quality Assessment [58.5808754919597]
Blind Image Quality Assessment (BIQA) aims to evaluate image quality in line with human perception, without reference benchmarks.
Deep learning BIQA methods typically rely on features from high-level tasks for transfer learning.
In this paper, we take an initial step towards exploring the diffusion model for feature denoising in BIQA.
arXiv Detail & Related papers (2024-01-22T13:38:24Z) - Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment [58.09173822651016]
We argue that it is possible to learn reference knowledge under the No-Reference Image Quality Assessment setting.
We propose a new framework to learn comparative knowledge from non-aligned reference images.
Experiments on eight standard NR-IQA datasets demonstrate superior performance over state-of-the-art NR-IQA methods.
arXiv Detail & Related papers (2023-12-01T13:56:01Z) - Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed as a weighted summation of predictions from all heads, using a lightweight $K$-means gating mechanism.
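The gated weighted summation described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact scheme: the one-centroid-per-task layout, the softmax over negative Euclidean distances, and the temperature `tau` are all assumptions for this sketch.

```python
import math

def gated_quality(feature, centroids, head_scores, tau=1.0):
    """Hypothetical K-means gating sketch: weight each task head's quality
    prediction by a softmax over negative distances between the image
    feature and that task's centroid (closer centroid -> larger weight)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    logits = [-dist(feature, c) / tau for c in centroids]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * s for w, s in zip(weights, head_scores))

# A feature sitting on task 0's centroid takes almost all of head 0's score.
print(gated_quality([0.0, 0.0], [[0.0, 0.0], [10.0, 0.0]], [4.0, 1.0]))
```

When the feature is equidistant from all centroids, the gate degenerates to a plain average of the head predictions, which is the intended stability/plasticity compromise.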
arXiv Detail & Related papers (2021-07-28T15:21:01Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to train it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
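The fidelity loss mentioned above compares a ground-truth preference probability for an image pair with a predicted one. A hedged sketch, assuming a Thurstone-style model mapping predicted qualities to a preference probability; the unit variances and function name are illustrative, not taken from the paper:

```python
import math

def fidelity_loss(p, q_i, q_j, var_i=1.0, var_j=1.0):
    """Fidelity loss over an image pair (i, j).
    p: ground-truth probability that image i has higher quality than j.
    p_hat: predicted probability from qualities q_i, q_j via the Gaussian
    CDF of their difference (Thurstone-style; variances are assumptions).
    The loss is zero iff p_hat == p."""
    p_hat = 0.5 * (1 + math.erf((q_i - q_j) / math.sqrt(2 * (var_i + var_j))))
    return 1 - math.sqrt(p * p_hat) - math.sqrt((1 - p) * (1 - p_hat))

# Confident, correct prediction: q_i far above q_j with p = 1 -> loss near 0.
print(fidelity_loss(1.0, 10.0, 0.0))
```

Because the loss only needs pairwise preference probabilities, synthetic and realistic distortion datasets can be mixed without calibrating their absolute MOS scales against each other.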
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.