Textural-Structural Joint Learning for No-Reference Super-Resolution
Image Quality Assessment
- URL: http://arxiv.org/abs/2205.13847v1
- Date: Fri, 27 May 2022 09:20:06 GMT
- Title: Textural-Structural Joint Learning for No-Reference Super-Resolution
Image Quality Assessment
- Authors: Yuqing Liu, Qi Jia, Shanshe Wang, Siwei Ma and Wen Gao
- Abstract summary: We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS), which pays more attention to significant areas of the image, we develop a spatial attention mechanism to make visually sensitive areas more distinguishable.
Experimental results show the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
- Score: 59.91741119995321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image super-resolution (SR) has been widely investigated in recent years.
However, it is challenging to fairly estimate the performance of various SR
methods due to the lack of reliable and accurate criteria for perceptual
quality. Existing SR image quality assessment (IQA) metrics usually concentrate
on specific kinds of degradation without distinguishing visually sensitive
areas, and thus cannot adapt to the diverse SR degradation situations. In this
paper, we focus on the textural and structural degradation of image SR, which
plays a critical role in visual perception, and design a
dual stream network to jointly explore the textural and structural information
for quality prediction, dubbed TSNet. By mimicking the human vision system
(HVS), which pays more attention to significant areas of the image, we develop
a spatial attention mechanism to make visually sensitive areas more
distinguishable, which improves the prediction accuracy. Feature normalization
(F-Norm) is also developed to investigate the inherent spatial correlation of
SR features and boost the network representation capacity. Experimental results
show that the proposed TSNet predicts visual quality more accurately than
state-of-the-art IQA methods and demonstrates better consistency with human
perception. The source code will be made available at
http://github.com/yuqing-liu-dut/NRIQA_SR.
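The abstract gives no layer-level details of TSNet, the spatial attention mechanism, or F-Norm. As a rough illustration only, under assumed feature shapes, a dual-stream quality predictor with attention-weighted, normalized features might be sketched as follows; every function and fusion choice here is hypothetical, not the authors' actual design.

```python
import numpy as np

def spatial_attention(feat):
    """Weight each spatial location by a sigmoid of its channel-mean
    activation, loosely mimicking HVS emphasis on significant areas
    (illustrative stand-in for the paper's attention module)."""
    energy = feat.mean(axis=0, keepdims=True)              # (1, H, W)
    attn = 1.0 / (1.0 + np.exp(-(energy - energy.mean())))  # sigmoid gate
    return feat * attn                                      # (C, H, W)

def f_norm(feat, eps=1e-6):
    """Per-channel spatial normalization, one plausible reading of
    'feature normalization (F-Norm)'; the paper's exact form may differ."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    return (feat - mu) / (sigma + eps)

def dual_stream_score(textural_feat, structural_feat):
    """Fuse a textural and a structural feature stream into one scalar
    quality score via global average pooling (toy fusion)."""
    t = f_norm(spatial_attention(textural_feat)).mean()
    s = f_norm(spatial_attention(structural_feat)).mean()
    return float(0.5 * (t + s))

# Example with random stand-in features of shape (channels, H, W).
rng = np.random.default_rng(0)
tex = rng.normal(size=(8, 16, 16))
struct = rng.normal(size=(8, 16, 16))
print(dual_stream_score(tex, struct))
```

In the actual network both streams would be learned convolutional features and the fusion would be a trained regressor; the sketch only shows how attention and normalization compose before pooling.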
Related papers
- Perception- and Fidelity-aware Reduced-Reference Super-Resolution Image Quality Assessment [25.88845910499606]
We propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA).
PFIQA outperforms current state-of-the-art models across three widely-used SR-IQA benchmarks.
arXiv Detail & Related papers (2024-05-15T16:09:22Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose a novel class of state-of-the-art (SOTA) generative models, which exhibit the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- Scale Guided Hypernetwork for Blind Super-Resolution Image Quality Assessment [2.4366811507669124]
Existing blind SR image quality assessment (IQA) metrics merely focus on visual characteristics of super-resolution images.
We propose a scale guided hypernetwork framework that evaluates SR image quality in a scale-adaptive manner.
arXiv Detail & Related papers (2023-06-04T16:17:19Z)
- CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution [158.2282163651066]
This paper proposes a continuous implicit attention-in-attention network, called CiaoSR.
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
We embed a scale-aware attention in this implicit attention network to exploit additional non-local information.
arXiv Detail & Related papers (2022-12-08T15:57:46Z)
- DeepWSD: Projecting Degradations in Perceptual Space to Wasserstein Distance in Deep Feature Space [67.07476542850566]
We propose to model the quality degradation in perceptual space from a statistical distribution perspective.
The quality is measured based upon the Wasserstein distance in the deep feature domain.
The deep Wasserstein distance (DeepWSD) performed on features from neural networks enjoys better interpretability of the quality contamination.
arXiv Detail & Related papers (2022-08-05T02:46:12Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and mitigates aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-Resolution techniques have greatly improved the visual quality of images by enhancing their resolution.
It also calls for an efficient SR Image Quality Assessment (SR-IQA) to evaluate those algorithms or their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z)
- Discovering "Semantics" in Super-Resolution Networks [54.45509260681529]
Super-resolution (SR) is a fundamental and representative task of low-level vision area.
It is generally thought that the features extracted from the SR network have no specific semantic information.
Can we find any "semantics" in SR networks?
arXiv Detail & Related papers (2021-08-01T09:12:44Z)
- Learning-Based Quality Assessment for Image Super-Resolution [25.76907513611358]
We build a large-scale SR image database using a novel semi-automatic labeling approach.
The resulting SR Image quality database contains 8,400 images of 100 natural scenes.
We train an end-to-end Deep Image SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for feature extraction, followed by a feature fusion network for quality prediction.
arXiv Detail & Related papers (2020-12-16T04:06:27Z)
- Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks [41.558981828761574]
We propose a no-reference/blind deep neural network-based SR image quality assessor (DeepSRQ).
To learn more discriminative feature representations of various distorted SR images, the proposed DeepSRQ is a two-stream convolutional network.
Experimental results on three publicly available SR image quality databases demonstrate the effectiveness and generalization ability of our proposed DeepSRQ.
arXiv Detail & Related papers (2020-04-13T19:14:28Z)
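One entry above, DeepWSD, measures quality degradation as a Wasserstein distance between deep feature distributions. As a self-contained illustration of the underlying measure only (DeepWSD itself operates on neural network features, not raw samples), the 1-D Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their order statistics:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size empirical samples.
    In one dimension the optimal transport plan matches sorted values,
    so the distance is the mean absolute gap between order statistics."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape, "equal sample sizes assumed for simplicity"
    return float(np.mean(np.abs(a - b)))

# Identical samples have distance 0; a shifted copy has distance = shift.
x = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein_1d(x, x))        # 0.0
print(wasserstein_1d(x, x + 0.5))  # 0.5
```

A production implementation would handle unequal sample sizes via the quantile functions (as `scipy.stats.wasserstein_distance` does); the toy version keeps only the sorting insight.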
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.