Learning-Based Quality Assessment for Image Super-Resolution
- URL: http://arxiv.org/abs/2012.08732v1
- Date: Wed, 16 Dec 2020 04:06:27 GMT
- Title: Learning-Based Quality Assessment for Image Super-Resolution
- Authors: Tiesong Zhao, Yuting Lin, Yiwen Xu, Weiling Chen, Zhou Wang
- Abstract summary: We build a large-scale SR image database using a novel semi-automatic labeling approach.
The resulting SR Image quality database contains 8,400 images of 100 natural scenes.
We train an end-to-end Deep Image SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for feature extraction, followed by a feature fusion network for quality prediction.
- Score: 25.76907513611358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image Super-Resolution (SR) techniques improve visual quality by enhancing
the spatial resolution of images. Quality evaluation metrics play a critical
role in comparing and optimizing SR algorithms, but current metrics achieve
only limited success, largely due to the lack of large-scale quality databases,
which are essential for learning accurate and robust SR quality metrics. In
this work, we first build a large-scale SR image database using a novel
semi-automatic labeling approach, which allows us to label a large number of
images with manageable human workload. The resulting SR Image quality database
with Semi-Automatic Ratings (SISAR), so far the largest SR-IQA database,
contains 8,400 images of 100 natural scenes. We train an end-to-end Deep Image
SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for
feature extraction, followed by a feature fusion network for quality
prediction. Experimental results demonstrate that the proposed method
outperforms state-of-the-art metrics and achieves promising generalization
performance in cross-database tests. The SISAR database and DISQ model will be
made publicly available to facilitate reproducible research.
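The abstract describes DISQ only at a high level: two feature-extraction streams followed by a fusion network that regresses a quality score. The PyTorch sketch below illustrates that general two-stream-plus-fusion pattern; the ResNet-18 backbones, concatenation-based fusion, regressor sizes, and the use of an upscaled low-resolution image as the second stream's input are assumptions for illustration, not the authors' published DISQ design.
```python
# Minimal sketch of a two-stream feature-extraction + fusion network for
# SR image quality prediction. All architectural choices here are
# illustrative assumptions, not the published DISQ configuration.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two independent CNN feature extractors (weights not shared):
        # ResNet-18 with the final classification layer removed.
        self.stream_sr = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
        self.stream_ref = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
        # Fusion network: concatenated features -> scalar quality score.
        self.fusion = nn.Sequential(
            nn.Linear(512 * 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, sr_img, ref_img):
        f_sr = self.stream_sr(sr_img).flatten(1)     # (N, 512)
        f_ref = self.stream_ref(ref_img).flatten(1)  # (N, 512)
        return self.fusion(torch.cat([f_sr, f_ref], dim=1)).squeeze(1)


if __name__ == "__main__":
    model = TwoStreamQualityNet()
    sr = torch.randn(2, 3, 224, 224)   # SR images
    ref = torch.randn(2, 3, 224, 224)  # second-stream input, e.g. an upscaled LR image (assumed)
    print(model(sr, ref).shape)        # torch.Size([2]): one quality score per image
```
In practice, such a model would be trained with an L1 or L2 loss against the database's subjective quality ratings and evaluated by the correlation (e.g., PLCC/SROCC) between predicted and subjective scores, which is the standard protocol in cross-database IQA tests.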
Related papers
- Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset [4.770359059226373]
Super-Resolution (SR), a key consumer technology, is essential for displaying low-quality broadcast content on high-resolution screens in full-screen format.
Evaluating the quality of SR images generated from low-quality sources, such as SR-enhanced broadcast content, is challenging.
We introduce a new IQA dataset for SR broadcast images in both 2K and 4K resolutions.
arXiv Detail & Related papers (2024-09-26T01:07:15Z)
- Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR).
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z)
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Perception- and Fidelity-aware Reduced-Reference Super-Resolution Image Quality Assessment [25.88845910499606]
We propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA).
PFIQA outperforms current state-of-the-art models across three widely-used SR-IQA benchmarks.
arXiv Detail & Related papers (2024-05-15T16:09:22Z)
- Exploiting Self-Supervised Constraints in Image Super-Resolution [72.35265021054471]
This paper introduces a novel self-supervised constraint for single image super-resolution, termed SSC-SR.
SSC-SR uniquely addresses the divergence in image complexity by employing a dual asymmetric paradigm and a target model updated via exponential moving average to enhance stability.
Empirical evaluations reveal that our SSC-SR framework delivers substantial enhancements on a variety of benchmark datasets, achieving an average increase of 0.1 dB over EDSR and 0.06 dB over SwinIR.
arXiv Detail & Related papers (2024-03-30T06:18:50Z)
- QMRNet: Quality Metric Regression for EO Image Quality Assessment and Super-Resolution [2.425299069769717]
We benchmark state-of-the-art Super-Resolution (SR) algorithms for distinct Earth Observation (EO) datasets.
We also propose a novel Quality Metric Regression Network (QMRNet) that is able to predict quality (as a No-Reference metric) by training on any property of the image.
The overall benchmark shows promising results for LIIF, CAR, and MSRN, as well as the potential use of QMRNet as a loss for optimizing SR predictions.
arXiv Detail & Related papers (2022-10-12T22:51:13Z)
- A No-Reference Deep Learning Quality Assessment Method for Super-resolution Images Based on Frequency Maps [39.58198651685851]
We propose a no-reference deep-learning image quality assessment method based on frequency maps.
We first obtain the high-frequency map (HM) and low-frequency map (LM) of the SR image (SRI) by using the Sobel operator and piecewise smooth image approximation.
Our method outperforms all compared IQA models on the three selected super-resolution quality assessment (SRQA) databases.
arXiv Detail & Related papers (2022-06-09T05:43:37Z)
- Hierarchical Similarity Learning for Aliasing Suppression Image Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual-stream network, dubbed TSNet, to jointly explore textural and structural information for quality prediction.
By mimicking the human visual system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism to make the visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-Resolution techniques have greatly improved the visual quality of images by enhancing their resolution.
This also calls for an efficient SR Image Quality Assessment (SR-IQA) method to evaluate these algorithms and their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)