Scale Guided Hypernetwork for Blind Super-Resolution Image Quality
Assessment
- URL: http://arxiv.org/abs/2306.02398v1
- Date: Sun, 4 Jun 2023 16:17:19 GMT
- Title: Scale Guided Hypernetwork for Blind Super-Resolution Image Quality
Assessment
- Authors: Jun Fu
- Abstract summary: Existing blind SR image quality assessment (IQA) metrics merely focus on visual characteristics of super-resolution images.
We propose a scale guided hypernetwork framework that evaluates SR image quality in a scale-adaptive manner.
- Score: 2.4366811507669124
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the emergence of image super-resolution (SR) algorithms, blindly
evaluating the quality of super-resolution images has become an urgent task.
However, existing blind SR image quality assessment (IQA) metrics merely focus
on visual characteristics of super-resolution images, ignoring the available
scale information. In this paper, we reveal that the scale factor has a
statistically significant impact on subjective quality scores of SR images,
indicating that the scale information can be used to guide the task of blind SR
IQA. Motivated by this, we propose a scale guided hypernetwork framework that
evaluates SR image quality in a scale-adaptive manner. Specifically, the blind
SR IQA procedure is divided into three stages, i.e., content perception,
evaluation rule generation, and quality prediction. After content perception, a
hypernetwork generates the evaluation rule used in quality prediction based on
the scale factor of the SR image. We apply the proposed scale guided
hypernetwork framework to existing representative blind IQA metrics, and
experimental results show that the proposed framework not only boosts the
performance of these IQA metrics but also enhances their generalization
abilities. Source code will be available at https://github.com/JunFu1995/SGH.
Related papers
- Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset [4.770359059226373]
Super-Resolution (SR), a key consumer technology, is essential to display low-quality broadcast content on high-resolution screens in full-screen format.
Evaluating the quality of SR images generated from low-quality sources, such as SR-enhanced broadcast content, is challenging.
We introduce a new IQA dataset for SR broadcast images in both 2K and 4K resolutions.
arXiv Detail & Related papers (2024-09-26T01:07:15Z) - Perception- and Fidelity-aware Reduced-Reference Super-Resolution Image Quality Assessment [25.88845910499606]
We propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA).
PFIQA outperforms current state-of-the-art models across three widely-used SR-IQA benchmarks.
arXiv Detail & Related papers (2024-05-15T16:09:22Z) - Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z) - Blind Image Quality Assessment via Vision-Language Correspondence: A
Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z) - CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
arXiv Detail & Related papers (2022-07-21T07:50:50Z) - Textural-Structural Joint Learning for No-Reference Super-Resolution
Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS) that pays more attention to the significant areas of the image, we develop the spatial attention mechanism to make the visual-sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z) - SPQE: Structure-and-Perception-Based Quality Evaluation for Image
Super-Resolution [24.584839578742237]
Super-Resolution technique has greatly improved the visual quality of images by enhancing their resolutions.
It also calls for an efficient SR Image Quality Assessment (SR-IQA) to evaluate those algorithms or their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z) - Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary prediction task.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Learning-Based Quality Assessment for Image Super-Resolution [25.76907513611358]
We build a large-scale SR image database using a novel semi-automatic labeling approach.
The resulting SR image quality database contains 8,400 images of 100 natural scenes.
We train an end-to-end Deep Image SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for feature extraction, followed by a feature fusion network for quality prediction.
arXiv Detail & Related papers (2020-12-16T04:06:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.