No-Reference Image Quality Assessment via Feature Fusion and Multi-Task
Learning
- URL: http://arxiv.org/abs/2006.03783v1
- Date: Sat, 6 Jun 2020 05:04:10 GMT
- Title: No-Reference Image Quality Assessment via Feature Fusion and Multi-Task
Learning
- Authors: S. Alireza Golestaneh, Kris Kitani
- Abstract summary: Blind or no-reference image quality assessment (NR-IQA) is a fundamental and still unsolved problem.
We propose a simple yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
- Score: 29.19484863898778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind or no-reference image quality assessment (NR-IQA) is a
fundamental and still unsolved problem, made challenging by the
unavailability of a reference image. It is vital to the streaming and
social media industries, which impact billions of viewers daily. Although
previous NR-IQA methods have leveraged a variety of feature extraction
approaches, a performance bottleneck remains. In this paper, we propose a
simple yet effective general-purpose no-reference (NR) image quality
assessment (IQA) framework based on multi-task learning. Our model employs
distortion types as well as subjective human scores to predict image
quality. We propose a feature fusion method that uses distortion
information to improve the quality score estimation task. In our
experiments, we demonstrate that multi-task learning combined with our
feature fusion method yields better performance on the NR-IQA task. To
demonstrate the effectiveness of our approach, we test it on seven
standard datasets and achieve state-of-the-art results on several of them.
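As a rough illustration of the recipe described in the abstract, the sketch below pairs a shared backbone with a distortion-classification head and a quality-regression head, and fuses the distortion posterior into the quality branch. The backbone, fusion-by-concatenation, and loss weighting here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiTaskIQA(nn.Module):
    """Shared backbone with a distortion head and a fused quality head."""
    def __init__(self, n_distortions=25, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in feature extractor
            nn.Conv2d(3, 64, 7, stride=4), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.distortion_head = nn.Linear(feat_dim, n_distortions)
        # the quality head sees image features fused with distortion evidence
        self.quality_head = nn.Linear(feat_dim + n_distortions, 1)

    def forward(self, x):
        feat = self.backbone(x)
        dist_logits = self.distortion_head(feat)
        fused = torch.cat([feat, dist_logits.softmax(-1)], dim=-1)
        return self.quality_head(fused).squeeze(-1), dist_logits

model = MultiTaskIQA()
imgs = torch.randn(4, 3, 224, 224)
mos = torch.rand(4)                        # subjective scores, normalized
dist_labels = torch.randint(0, 25, (4,))   # distortion-type labels
q, d = model(imgs)
loss = nn.functional.mse_loss(q, mos) + 0.5 * nn.functional.cross_entropy(d, dist_labels)
loss.backward()
```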
Related papers
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild).
Our method includes a multi-functional IQA task paradigm that encompasses assessment and comparison tasks, brief and detailed responses, and full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
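As a rough sketch of what such a multi-functional task paradigm could look like as data, the hypothetical records below illustrate the assessment/comparison, brief/detailed, and full-/non-reference axes; every field name, file name, and answer is invented, not taken from DepictQA-Wild.

```python
tasks = [
    {"task": "assess", "reference": None, "images": ["img_a.png"],
     "response_style": "brief",
     "answer": "Slightly blurry, with mild compression artifacts."},
    {"task": "compare", "reference": "ref.png",
     "images": ["img_a.png", "img_b.png"],
     "response_style": "detailed",
     "answer": "Image A preserves the reference's texture better, while "
               "image B shows stronger banding in the sky region."},
]
```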
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
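A minimal sketch of the general idea behind opinion-unaware BIQA with feature statistics: fit a multivariate Gaussian to deep features of pristine images and score a test image by its distance from those statistics, in the spirit of NIQE. The specific distance below is a generic choice, not necessarily the paper's exact statistical model.

```python
import numpy as np

def fit_gaussian(feats):                    # feats: (n_samples, dim)
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

def gaussian_distance(mu1, cov1, mu2, cov2):
    """Mahalanobis-like distance between two feature Gaussians (cf. NIQE)."""
    d, cov = mu1 - mu2, (cov1 + cov2) / 2.0
    return float(np.sqrt(d @ np.linalg.pinv(cov) @ d))

rng = np.random.default_rng(0)
pristine = rng.normal(size=(1000, 64))      # stand-in deep features, pristine corpus
test = rng.normal(loc=0.3, size=(200, 64))  # stand-in features of a test image
score = gaussian_distance(*fit_gaussian(pristine), *fit_gaussian(test))
print(f"distance from pristine statistics: {score:.3f}")  # larger = worse quality
```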
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness; however, such generalist pre-training is often suboptimal for the IQA-specific task.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
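A toy sketch of CoOp-style prompt learning applied to IQA, where only a small set of context vectors is trained against frozen encoders and the predicted score is an expectation over quality-level prompts. The tiny linear/GRU encoders stand in for CLIP's towers; nothing here is the paper's actual architecture.

```python
import torch
import torch.nn as nn

dim, n_ctx, levels = 32, 4, 5                 # five quality levels: bad .. excellent
image_encoder = nn.Linear(3 * 64 * 64, dim)   # frozen stand-in for an image tower
text_encoder = nn.GRU(dim, dim, batch_first=True)  # frozen stand-in text tower
for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
    p.requires_grad = False

ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)  # learnable prompt context
level_emb = torch.randn(levels, 1, dim)             # fixed quality-word embeddings

def text_features():
    prompts = torch.cat([ctx.expand(levels, n_ctx, dim), level_emb], dim=1)
    _, h = text_encoder(prompts)                    # encode [ctx ... ctx, level]
    return nn.functional.normalize(h[-1], dim=-1)   # (levels, dim)

imgs = torch.randn(2, 3 * 64 * 64)
img_f = nn.functional.normalize(image_encoder(imgs), dim=-1)
prob = (img_f @ text_features().t()).softmax(-1)          # affinity to each level
mos_pred = (prob * torch.linspace(0, 1, levels)).sum(-1)  # expected quality score
optimizer = torch.optim.Adam([ctx], lr=1e-3)              # only the prompts train
```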
- Learning Generalizable Perceptual Representations for Data-Efficient No-Reference Image Quality Assessment [7.291687946822539]
A major drawback of state-of-the-art NR-IQA techniques is their reliance on a large number of human annotations.
We enable the learning of low-level quality features agnostic to distortion types by introducing a novel quality-aware contrastive loss.
We obtain zero-shot quality predictions from both pathways in a completely blind setting.
arXiv Detail & Related papers (2023-12-08T05:24:21Z)
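A minimal sketch of what a quality-aware contrastive loss could look like: each anchor attracts samples with similar quality annotations and repels the rest, in a SupCon-style formulation. The margin-based definition of positives is an illustrative assumption, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def quality_aware_contrastive(z, q, tau=0.1, margin=0.1):
    """z: (B, d) embeddings; q: (B,) quality annotations in [0, 1]."""
    z = F.normalize(z, dim=-1)
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = (z @ z.t() / tau).masked_fill(eye, -1e9)     # drop self-pairs
    pos = ((q[:, None] - q[None, :]).abs() < margin) & ~eye
    log_prob = sim.log_softmax(dim=1)
    n_pos = pos.sum(dim=1).clamp(min=1)                # avoid division by zero
    return -(log_prob * pos).sum(dim=1).div(n_pos).mean()

z = torch.randn(8, 128, requires_grad=True)
loss = quality_aware_contrastive(z, torch.rand(8))
loss.backward()
```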
- Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment [58.09173822651016]
We argue that it is possible to learn reference knowledge under the No-Reference Image Quality Assessment setting.
We propose a new framework to learn comparative knowledge from non-aligned reference images.
Experiments on eight standard NR-IQA datasets demonstrate superior performance over state-of-the-art NR-IQA methods.
arXiv Detail & Related papers (2023-12-01T13:56:01Z)
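One plausible way to exploit a non-aligned reference, sketched below: patch tokens of the distorted image cross-attend to tokens of an arbitrary high-quality image, so no spatial correspondence is required. The use of nn.MultiheadAttention is an illustrative choice, not the paper's actual module.

```python
import torch
import torch.nn as nn

class NonAlignedRefBlock(nn.Module):
    """Distorted-image tokens cross-attend to non-aligned reference tokens."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, dist_tokens, ref_tokens):
        # queries from the distorted image; keys/values from the reference,
        # so no spatial alignment between the two images is required
        ctx, _ = self.attn(dist_tokens, ref_tokens, ref_tokens)
        pooled = (dist_tokens + ctx).mean(dim=1)   # residual + global pool
        return self.head(pooled).squeeze(-1)

block = NonAlignedRefBlock()
dist = torch.randn(2, 196, 64)   # patch tokens of the distorted image
ref = torch.randn(2, 196, 64)    # patch tokens of an unrelated pristine image
score = block(dist, ref)         # (2,) predicted quality
```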
- Assessor360: Multi-sequence Network for Blind Omnidirectional Image Quality Assessment [50.82681686110528]
Blind Omnidirectional Image Quality Assessment (BOIQA) aims to objectively assess the human perceptual quality of omnidirectional images (ODIs).
Quality assessment of ODIs is severely hampered because the existing BOIQA pipeline does not model the observer's browsing process.
We propose a novel multi-sequence network for BOIQA called Assessor360, which is derived from the realistic multi-assessor ODI quality assessment procedure.
arXiv Detail & Related papers (2023-05-18T13:55:28Z)
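A heavily simplified sketch of the multi-sequence idea: several simulated browsing sequences of viewports are scored by a shared network and averaged. Real ODI viewport sampling involves sphere-to-plane projection; plain random crops stand in for it here, and nothing below reproduces Assessor360's actual sequence generation.

```python
import torch
import torch.nn as nn

class SequenceScorer(nn.Module):
    """Scores one simulated browsing sequence of viewports."""
    def __init__(self, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, dim, 7, stride=4), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # aggregate along the path
        self.head = nn.Linear(dim, 1)

    def forward(self, seqs):                 # seqs: (B, T, 3, H, W)
        b, t = seqs.shape[:2]
        f = self.cnn(seqs.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(f)
        return self.head(h[-1]).squeeze(-1)  # one score per sequence

def assess(odi, model, n_seq=4, t=8, size=128):
    """Average scores over several simulated assessor browsing sequences."""
    seqs = []
    for _ in range(n_seq):
        ys = torch.randint(0, odi.shape[1] - size, (t,)).tolist()
        xs = torch.randint(0, odi.shape[2] - size, (t,)).tolist()
        seqs.append(torch.stack([odi[:, y:y + size, x:x + size]
                                 for y, x in zip(ys, xs)]))
    return model(torch.stack(seqs)).mean()

odi = torch.randn(3, 512, 1024)              # equirectangular ODI (C, H, W)
print(assess(odi, SequenceScorer()))
```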
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict image quality.
It ranks 2nd in the no-reference track of the NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
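A minimal sketch of multi-stage feature fusion, with torchvision's ResNet-50 swapped in for the paper's Swin Transformer for brevity: features from four stages, spanning local to global context, are pooled, concatenated, and regressed to a single score.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

stages = {"layer1": "s1", "layer2": "s2", "layer3": "s3", "layer4": "s4"}
backbone = create_feature_extractor(resnet50(weights=None), stages)
head = nn.Linear(256 + 512 + 1024 + 2048, 1)   # channel widths of the 4 stages

def predict_quality(imgs):
    feats = backbone(imgs)                                  # dict of stage maps
    pooled = [f.mean(dim=(2, 3)) for f in feats.values()]   # global average pool
    return head(torch.cat(pooled, dim=1)).squeeze(-1)       # fuse, then regress

score = predict_quality(torch.randn(2, 3, 224, 224))        # (2,)
```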
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
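A minimal sketch of the CNN-plus-transformer recipe: CNN feature maps become tokens for a transformer encoder, and an optional reference branch supports both FR and NR modes. The difference-token fusion for FR mode is an illustrative assumption, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CNNTransformerIQA(nn.Module):
    """CNN features become tokens for a transformer encoder; FR and NR modes."""
    def __init__(self, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, dim, 7, stride=8), nn.ReLU())
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def tokens(self, x):
        return self.cnn(x).flatten(2).transpose(1, 2)   # (B, HW, dim)

    def forward(self, dist, ref=None):
        t = self.tokens(dist)
        if ref is not None:   # FR mode: append distorted-minus-reference tokens
            t = torch.cat([t, t - self.tokens(ref)], dim=1)
        return self.head(self.encoder(t).mean(dim=1)).squeeze(-1)

model = CNNTransformerIQA()
x = torch.randn(2, 3, 224, 224)
nr_score = model(x)                                 # no-reference mode
fr_score = model(x, torch.randn(2, 3, 224, 224))    # full-reference mode
```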
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem of distinguishing distortion types and degrees.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
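A minimal sketch of a contrastive pairwise objective in the generic NT-Xent form: two views of each image (e.g., differently cropped or distorted) should embed close together and away from all other images. This follows the SimCLR-style formulation rather than CONTRIQUE's exact training recipe.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, d) embeddings of two views of the same B images."""
    z = F.normalize(torch.cat([z1, z2]), dim=-1)            # (2B, d)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(len(z), dtype=torch.bool), -1e9)
    b = len(z1)                                             # row i pairs with i+B
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128, requires_grad=True), torch.randn(8, 128))
loss.backward()
```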