Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset
- URL: http://arxiv.org/abs/2409.17451v1
- Date: Thu, 26 Sep 2024 01:07:15 GMT
- Title: Study of Subjective and Objective Quality in Super-Resolution Enhanced Broadcast Images on a Novel SR-IQA Dataset
- Authors: Yongrok Kim, Junha Shin, Juhyun Lee, Hyunsuk Ko
- Abstract summary: Super-Resolution (SR), a key consumer technology, is essential for displaying low-quality broadcast content on high-resolution screens in full-screen format.
However, evaluating the quality of SR images generated from low-quality sources, such as SR-enhanced broadcast content, is challenging.
We introduce a new IQA dataset for SR broadcast images in both 2K and 4K resolutions.
- Score: 4.770359059226373
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To display low-quality broadcast content on high-resolution screens in full-screen format, the application of Super-Resolution (SR), a key consumer technology, is essential. Recently, SR methods have been developed that not only increase resolution while preserving the original image information but also enhance the perceived quality. However, evaluating the quality of SR images generated from low-quality sources, such as SR-enhanced broadcast content, is challenging due to the need to consider both distortions and improvements. Additionally, assessing SR image quality without original high-quality sources presents another significant challenge. Unfortunately, there has been a dearth of research specifically addressing the Image Quality Assessment (IQA) of SR images under these conditions. In this work, we introduce a new IQA dataset for SR broadcast images in both 2K and 4K resolutions. We conducted a subjective quality evaluation to obtain the Mean Opinion Score (MOS) for these SR images and performed a comprehensive human study to identify the key factors influencing the perceived quality. Finally, we evaluated the performance of existing IQA metrics on our dataset. This study reveals the limitations of current metrics, highlighting the need for a more robust IQA metric that better correlates with the perceived quality of SR images.
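As a concrete illustration of the evaluation protocol described above, the sketch below computes MOS as the per-image mean of subject ratings and scores an objective metric by its Spearman (SRCC) and Pearson (PLCC) correlation with MOS, the standard figures of merit in IQA studies. The arrays are illustrative stand-ins, not data from the paper.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Illustrative stand-ins: ratings[i, j] is subject j's score for image i
# (the actual subjective scores come from the paper's study, not shown here).
ratings = np.array([
    [4, 5, 4, 3, 5],
    [2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4],
    [3, 3, 4, 2, 3],
])
mos = ratings.mean(axis=1)  # Mean Opinion Score per image

# Hypothetical scores produced by some objective IQA metric on the same images.
metric_scores = np.array([0.82, 0.41, 0.95, 0.55])

srcc, _ = spearmanr(metric_scores, mos)  # monotonic (rank) agreement with MOS
plcc, _ = pearsonr(metric_scores, mos)   # linear agreement with MOS
print(f"SRCC={srcc:.3f}  PLCC={plcc:.3f}")
```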
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Perception- and Fidelity-aware Reduced-Reference Super-Resolution Image Quality Assessment [25.88845910499606]
We propose a novel dual-branch reduced-reference SR-IQA network, i.e., Perception- and Fidelity-aware SR-IQA (PFIQA).
PFIQA outperforms current state-of-the-art models across three widely-used SR-IQA benchmarks.
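The summary gives only the high-level design, so here is a hedged sketch of what a dual-branch reduced-reference SR-IQA model can look like, not the authors' actual PFIQA architecture: one branch scores perceptual quality from the SR image alone, the other assesses fidelity against the upsampled low-resolution source, and a small head fuses both.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchRRIQA(nn.Module):
    """Toy reduced-reference SR-IQA model: a perception branch on the SR
    image and a fidelity branch on the concatenated (SR, upsampled-LR) pair."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.perception = nn.Sequential(  # quality cues from the SR image alone
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fidelity = nn.Sequential(    # agreement of SR with its LR source
            nn.Conv2d(6, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(2 * feat_dim, 1)  # fuse branches into one score

    def forward(self, sr, lr):
        lr_up = F.interpolate(lr, size=sr.shape[-2:], mode='bilinear',
                              align_corners=False)
        feats = torch.cat([self.perception(sr),
                           self.fidelity(torch.cat([sr, lr_up], dim=1))], dim=1)
        return self.head(feats).squeeze(-1)

model = DualBranchRRIQA()
score = model(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 64, 64))
```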
arXiv Detail & Related papers (2024-05-15T16:09:22Z)
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
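As a rough sketch of the fusion pattern this entry describes (tiny placeholder encoders stand in for the Swin Transformer-B backbones, and a random vector stands in for LIQE's auxiliary features, which are not reimplemented here):

```python
import torch
import torch.nn as nn

def tiny_encoder(out_dim=16):
    # Placeholder for a real backbone such as Swin Transformer-B.
    return nn.Sequential(nn.Conv2d(3, out_dim, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

portrait_enc, face_enc = tiny_encoder(), tiny_encoder()
aux_dim = 8                        # stand-in for LIQE's quality/scene features
head = nn.Linear(16 + 16 + aux_dim, 1)

portrait = torch.rand(1, 3, 224, 224)
face = torch.rand(1, 3, 112, 112)  # face region cropped from the portrait
aux_feats = torch.rand(1, aux_dim) # would come from LIQE in practice

fused = torch.cat([portrait_enc(portrait), face_enc(face), aux_feats], dim=1)
quality = head(fused)              # single predicted quality score
```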
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We leverage diffusion models, a state-of-the-art (SOTA) class of generative models capable of modeling intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- Scale Guided Hypernetwork for Blind Super-Resolution Image Quality Assessment [2.4366811507669124]
Existing blind SR image quality assessment (IQA) metrics focus only on the visual characteristics of super-resolved images, ignoring the SR scale factor.
We propose a scale guided hypernetwork framework that evaluates SR image quality in a scale-adaptive manner.
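As a rough illustration of the scale-adaptive idea, not the paper's exact architecture, a hypernetwork can map the SR scale factor to the weights of the final quality-regression layer, so the predictor itself changes with scale:

```python
import torch
import torch.nn as nn

class ScaleGuidedHead(nn.Module):
    """Toy hypernetwork: the SR scale factor generates the weights of the
    final quality-regression layer applied to image features."""
    def __init__(self, feat_dim=32):
        super().__init__()
        # Maps scale -> (weights, bias) of a feat_dim -> 1 linear layer.
        self.hyper = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                   nn.Linear(64, feat_dim + 1))

    def forward(self, feats, scale):
        params = self.hyper(scale.view(-1, 1))   # (B, feat_dim + 1)
        w, b = params[:, :-1], params[:, -1]
        return (feats * w).sum(dim=1) + b        # per-sample linear head

feats = torch.rand(4, 32)                # features from any image backbone
scale = torch.tensor([2., 2., 4., 8.])   # SR upscaling factors
scores = ScaleGuidedHead()(feats, scale) # one quality score per image
```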
arXiv Detail & Related papers (2023-06-04T16:17:19Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual-stream network, dubbed TSNet, that jointly explores textural and structural information for quality prediction.
By mimicking the human visual system (HVS), which pays more attention to the significant areas of an image, we develop a spatial attention mechanism that makes visually sensitive areas more distinguishable.
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
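A minimal sketch of a convolutional spatial attention mechanism of the kind the summary describes (the actual TSNet design may differ):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights feature map locations so visually salient regions
    contribute more, loosely mimicking HVS attention."""
    def __init__(self, channels):
        super().__init__()
        self.to_map = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.to_map(x))  # (B, 1, H, W) map in [0, 1]
        return x * attn                       # emphasize salient locations

feats = torch.rand(1, 32, 56, 56)
out = SpatialAttention(32)(feats)             # same shape, re-weighted
```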
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- SPQE: Structure-and-Perception-Based Quality Evaluation for Image Super-Resolution [24.584839578742237]
Super-Resolution techniques have greatly improved the visual quality of images by enhancing their resolution.
This progress also calls for an efficient SR Image Quality Assessment (SR-IQA) method to evaluate these algorithms and their generated images.
In emerging deep-learning-based SR, a generated high-quality, visually pleasing image may have different structures from its corresponding low-quality image.
arXiv Detail & Related papers (2022-05-07T07:52:55Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
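The distorted stimuli are produced by superimposing reference images in pairs; a minimal sketch of that mixing step (the blending weight here is illustrative, not necessarily what the database uses):

```python
import numpy as np

def mix_references(ref_a, ref_b, alpha=0.5):
    """Superimpose two reference images to simulate AR-style visual
    confusion; alpha controls the virtual/real blending ratio."""
    return alpha * ref_a + (1.0 - alpha) * ref_b

# Illustrative stand-ins for two same-sized reference images (H, W, 3).
ref_a = np.random.randint(0, 256, (480, 640, 3)).astype(np.float32)
ref_b = np.random.randint(0, 256, (480, 640, 3)).astype(np.float32)
confused = mix_references(ref_a, ref_b, alpha=0.6)
```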
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
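A hedged sketch of a contrastive pairwise objective of this kind, written as an NT-Xent-style loss; CONTRIQUE's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.1):
    """Contrastive pairwise loss: embeddings of two views of the same
    image attract; all other pairs in the batch repel."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    b = z1.size(0)
    # Row i < b pairs with i + b, and vice versa.
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(8, 128)  # embeddings of one augmented view per image
z2 = torch.randn(8, 128)  # embeddings of a second view of the same images
loss = ntxent_loss(z1, z2)
```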
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning-Based Quality Assessment for Image Super-Resolution [25.76907513611358]
We build a large-scale SR image database using a novel semi-automatic labeling approach.
The resulting SR image quality database contains 8,400 images of 100 natural scenes.
We train an end-to-end Deep Image SR Quality (DISQ) model by employing two-stream Deep Neural Networks (DNNs) for feature extraction, followed by a feature fusion network for quality prediction.
arXiv Detail & Related papers (2020-12-16T04:06:27Z)
- Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks [41.558981828761574]
We propose a no-reference/blind deep neural network-based SR image quality assessor (DeepSRQ).
To learn more discriminative feature representations of various distorted SR images, the proposed DeepSRQ is a two-stream convolutional network.
Experimental results on three publicly available SR image quality databases demonstrate the effectiveness and generalization ability of our proposed DeepSRQ.
arXiv Detail & Related papers (2020-04-13T19:14:28Z)