A combined full-reference image quality assessment approach based on convolutional activation maps
- URL: http://arxiv.org/abs/2010.09361v3
- Date: Thu, 3 Dec 2020 05:01:40 GMT
- Title: A combined full-reference image quality assessment approach based on convolutional activation maps
- Authors: Domonkos Varga
- Abstract summary: The goal of full-reference image quality assessment (FR-IQA) is to predict the quality of an image as perceived by human observers using its pristine, reference counterpart.
In this study, we explore a novel, combined approach which predicts the perceptual quality of a distorted image by compiling a feature vector from convolutional activation maps.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of full-reference image quality assessment (FR-IQA) is to predict
the quality of an image as perceived by human observers using its pristine,
reference counterpart. In this study, we explore a novel, combined approach
which predicts the perceptual quality of a distorted image by compiling a
feature vector from convolutional activation maps. More specifically, a
reference-distorted image pair is run through a pretrained convolutional
neural network and the activation maps are compared with a traditional image
similarity metric. Subsequently, the resulting feature vector is mapped onto
perceptual quality scores with the help of a trained support vector
regressor. A detailed parameter study is also presented in which the design
choices of the proposed method are justified. Furthermore, we study the
relationship between the number of training images and the prediction
performance, demonstrating that the proposed method can be trained on a
small amount of data and still reach high prediction performance. Our best
proposal - ActMapFeat - is compared to the state of the art on six publicly
available benchmark IQA databases: KADID-10k, TID2013, TID2008, MDID, CSIQ,
and VCL-FER. Our method significantly outperforms the state of the art on
these benchmark databases.
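The pipeline described in the abstract is straightforward to prototype. Below is a minimal sketch of that recipe, not the authors' released code: VGG16 as the pretrained CNN, per-channel correlation as the "traditional similarity metric", the tap layers, and the mean/std pooling are all assumptions made for illustration.

```python
# A minimal sketch of the described pipeline (not the authors' released
# code). Assumptions: VGG16 backbone, per-channel correlation as the
# similarity metric, mean/std pooling per layer.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVR

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
LAYERS = {3, 8, 15, 22, 29}  # assumed tap points: relu1_2 ... relu5_3
preprocess = T.Compose([T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def activation_maps(pil_img):
    """Run one image through the network, collecting activation maps."""
    x, maps = preprocess(pil_img).unsqueeze(0), []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in LAYERS:
                maps.append(x.squeeze(0))
    return maps

def channel_similarity(a, b, eps=1e-8):
    """Correlation between corresponding activation maps, one value/channel."""
    a, b = a.flatten(1), b.flatten(1)
    a = a - a.mean(1, keepdim=True)
    b = b - b.mean(1, keepdim=True)
    return ((a * b).sum(1) / (a.norm(dim=1) * b.norm(dim=1) + eps)).numpy()

def feature_vector(ref_img, dist_img):
    """Compile per-layer similarity statistics into one feature vector."""
    feats = []
    for ma, mb in zip(activation_maps(ref_img), activation_maps(dist_img)):
        s = channel_similarity(ma, mb)
        feats.extend([s.mean(), s.std()])  # assumed pooling
    return np.array(feats)

# Training: stack feature vectors of reference-distorted pairs into X, put
# their mean opinion scores into y, then fit the regressor:
#   svr = SVR(kernel="rbf").fit(X, y); quality = svr.predict(X_new)
```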
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
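As a rough illustration of combining pre-trained deep features with a statistical analysis model, the sketch below fits a multivariate Gaussian to features of pristine images and scores a test image by its distance to that model, in the spirit of NIQE; the paper's actual statistics may differ.

```python
# A hedged sketch of the general idea (not this paper's exact model): fit a
# multivariate Gaussian to deep features of pristine images, then score a
# test image by the distance of its feature statistics from that model.
import numpy as np

def fit_pristine_model(pristine_feats):
    """pristine_feats: (N, D) deep features from undistorted images."""
    mu = pristine_feats.mean(axis=0)
    cov = np.cov(pristine_feats, rowvar=False)
    return mu, cov

def ou_biqa_score(test_feats, mu, cov):
    """Smaller distance to the pristine statistics = better quality."""
    mu_t = test_feats.mean(axis=0)
    cov_t = np.cov(test_feats, rowvar=False)
    c = (cov + cov_t) / 2.0        # pooled covariance, NIQE-style
    d = mu - mu_t
    return float(np.sqrt(d @ np.linalg.pinv(c) @ d))
```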
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
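The MAP formulation lends itself to a compact sketch. The snippet below illustrates the general idea under heavy assumptions: a differentiable NR-IQA model supplies the quality term and a quadratic penalty stands in for the diffusion-latent prior; `decode` and `iqa_model` are hypothetical stand-ins, not the paper's components.

```python
# A minimal sketch of MAP-style enhancement driven by an NR-IQA model
# (assumptions throughout; the real method optimizes in diffusion latents).
import torch

def map_enhance(z0, decode, iqa_model, lam=0.1, steps=100, lr=1e-2):
    """Maximize quality(decode(z)) - lam * ||z - z0||^2 over latent z."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = decode(z)                               # latent -> image
        loss = -iqa_model(img) + lam * ((z - z0) ** 2).mean()
        loss.backward()
        opt.step()
    return decode(z.detach())
```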
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Test-time Distribution Learning Adapter for Cross-modal Visual Reasoning [16.998833621046117]
We propose the Test-Time Distribution LearNing Adapter (TT-DNA), which operates directly at test time.
Specifically, we estimate Gaussian distributions to model visual features of the few-shot support images to capture the knowledge from the support set.
Our extensive experimental results on visual reasoning for human object interaction demonstrate that our proposed TT-DNA outperforms existing state-of-the-art methods by large margins.
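A hedged sketch of the stated mechanism (not the released TT-DNA code): per class, a Gaussian is fitted over the few-shot support features, and a query is scored by its log-density under each class distribution. The diagonal covariance is an assumption made for stability with few shots.

```python
# Sketch: per-class Gaussians over few-shot support features (assumptions:
# diagonal covariance, log-density as the class score).
import torch

def fit_class_gaussians(support_feats, labels, num_classes, eps=1e-4):
    """support_feats: (N, D); returns per-class mean and variance."""
    mus, vars_ = [], []
    for c in range(num_classes):
        f = support_feats[labels == c]
        mus.append(f.mean(0))
        vars_.append(f.var(0, unbiased=False) + eps)
    return torch.stack(mus), torch.stack(vars_)

def gaussian_logits(query_feats, mus, vars_):
    """(Q, D) queries -> (Q, C) diagonal-Gaussian log-densities."""
    diff = query_feats[:, None, :] - mus[None, :, :]
    return -0.5 * ((diff ** 2 / vars_) + vars_.log()).sum(-1)
```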
arXiv Detail & Related papers (2024-03-10T01:34:45Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
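Distance correlation itself is a standard statistic, so the core computation can be sketched directly; treating the spatial positions of an activation map as samples and channels as dimensions is an assumption of this sketch, not necessarily the paper's exact formulation.

```python
# Sketch of distance correlation between two deep feature maps, the
# statistic DeepDC builds on (spatial positions treated as samples).
import torch

def _centered_dist(x):
    """x: (n, d) -> double-centered pairwise Euclidean distance matrix."""
    d = torch.cdist(x, x)
    return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()

def distance_correlation(fx, fy, eps=1e-8):
    """fx, fy: (C, H, W) activation maps -> scalar dCor in [0, 1]."""
    x = fx.flatten(1).t()          # (H*W, C): positions as samples
    y = fy.flatten(1).t()
    a, b = _centered_dist(x), _centered_dist(y)
    dcov2 = (a * b).mean()                          # squared dCov
    dvar2 = ((a * a).mean() * (b * b).mean()).sqrt()
    return (dcov2 / (dvar2 + eps)).clamp(min=0).sqrt()
```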
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability [2.3624125155742055]
We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
arXiv Detail & Related papers (2021-12-13T12:18:43Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
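The stated design, a CNN backbone feeding a transformer encoder, can be sketched as follows; the backbone choice, layer sizes, and mean pooling are assumptions rather than the paper's configuration.

```python
# Architectural sketch only: CNN features tokenized for a transformer
# encoder, regressed to a scalar quality score. All sizes are assumptions.
import torch.nn as nn
import torchvision.models as models

class CNNTransformerIQA(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers)
        self.head = nn.Linear(d_model, 1)  # scalar quality score

    def forward(self, img):
        f = self.proj(self.backbone(img))          # (B, d, h, w)
        tokens = f.flatten(2).transpose(1, 2)      # (B, h*w, d)
        return self.head(self.encoder(tokens).mean(1)).squeeze(-1)
```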
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- No-Reference Image Quality Assessment by Hallucinating Pristine Features [24.35220427707458]
We propose a no-reference (NR) image quality assessment (IQA) method via feature level pseudo-reference (PR) hallucination.
The effectiveness of our proposed method is demonstrated on four popular IQA databases.
arXiv Detail & Related papers (2021-08-09T16:48:34Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
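For orientation, the snippet below shows a plain single-scale gradient-saliency baseline, the starting point that CAMERAS refines with multi-scale accumulation; it is not the CAMERAS algorithm itself.

```python
# Simplified gradient-saliency baseline (not the CAMERAS algorithm):
# pixel importance as the magnitude of the class-score gradient.
import torch

def gradient_saliency(model, image, target_class):
    """image: (1, 3, H, W); returns an (H, W) pixel-importance map."""
    x = image.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1).values.squeeze(0)  # max over channels
```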
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the accuracy of the listed information and is not responsible for any consequences arising from its use.