Deep Superpixel-based Network for Blind Image Quality Assessment
- URL: http://arxiv.org/abs/2110.06564v1
- Date: Wed, 13 Oct 2021 08:26:58 GMT
- Title: Deep Superpixel-based Network for Blind Image Quality Assessment
- Authors: Guangyi Yang, Yang Zhan, and Yuxuan Wang
- Abstract summary: The goal in a blind image quality assessment (BIQA) model is to simulate the process of evaluating images by human eyes.
We propose a deep adaptive superpixel-based network, namely DSN-IQA, to assess image quality based on multi-scale features and superpixel segmentation.
- Score: 4.079861933099766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal in a blind image quality assessment (BIQA) model is to simulate the
process of evaluating images by human eyes and accurately assess the quality of
the image. Although many approaches effectively identify degradation, they do
not fully consider the semantic content of images affected by distortion. To
fill this gap, we propose a deep adaptive superpixel-based network, namely
DSN-IQA, to assess image quality based on multi-scale features and superpixel
segmentation. The DSN-IQA can adaptively accept images of arbitrary scale as
input, making the assessment process similar to human
perception. The network uses two models to extract multi-scale semantic
features and generate a superpixel adjacency map. These two elements are united
together via feature fusion to accurately predict image quality. Experimental
results on different benchmark databases demonstrate that our algorithm is
highly competitive with other approaches when assessing challenging authentic
image databases. Also, thanks to the adaptive deep superpixel-based network, our
model accurately assesses images with complicated distortion, much like the human
eye.
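The abstract describes generating a superpixel adjacency map that is fused with multi-scale semantic features. As an illustration only (not the authors' implementation; the function name and toy label map are mine), the sketch below builds a binary adjacency matrix from a precomputed superpixel label map. In practice the labels would come from a segmentation algorithm such as SLIC.

```python
import numpy as np

def superpixel_adjacency(labels: np.ndarray) -> np.ndarray:
    """Build a binary adjacency matrix from a superpixel label map.

    labels: 2-D integer array where labels[y, x] is the superpixel id
    of pixel (y, x); ids are assumed to be 0..K-1.
    """
    k = int(labels.max()) + 1
    adj = np.zeros((k, k), dtype=bool)
    # Compare each pixel with its right and bottom neighbours; where the
    # labels differ, the two superpixels share a boundary.
    for a, b in [(labels[:, :-1], labels[:, 1:]),   # horizontal pairs
                 (labels[:-1, :], labels[1:, :])]:  # vertical pairs
        mask = a != b
        adj[a[mask], b[mask]] = True
        adj[b[mask], a[mask]] = True
    return adj

# Toy 2x4 label map with three superpixels:
# 0 0 1 1
# 0 2 2 1
toy = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1]])
A = superpixel_adjacency(toy)
```

The resulting matrix can serve as the graph structure over which superpixel features are aggregated before fusion with the multi-scale features.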
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA)
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z) - ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z) - Assessor360: Multi-sequence Network for Blind Omnidirectional Image
Quality Assessment [50.82681686110528]
Blind Omnidirectional Image Quality Assessment (BOIQA) aims to objectively assess the human perceptual quality of omnidirectional images (ODIs).
The quality assessment of ODIs is severely hampered by the fact that the existing BOIQA pipeline lacks the modeling of the observer's browsing process.
We propose a novel multi-sequence network for BOIQA called Assessor360, which is derived from the realistic multi-assessor ODI quality assessment procedure.
arXiv Detail & Related papers (2023-05-18T13:55:28Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
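The contrastive pairwise objective mentioned above can be sketched in a generic NT-Xent-style form. This is an illustrative sketch assuming in-batch negatives and a cosine-similarity temperature of my choosing, not the exact CONTRIQUE training code:

```python
import numpy as np

def contrastive_pairwise_loss(z1, z2, temperature=0.1):
    """NT-Xent-style contrastive loss between two batches of embeddings.

    z1[i] and z2[i] are two views of the same image (a positive pair);
    every other combination in the batch is treated as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature           # (N, N) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as the targets.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

Minimizing this loss pulls matching views together and pushes non-matching images apart, which is the mechanism by which such objectives learn quality-aware representations without subjective labels.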
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- MUSIQ: Multi-scale Image Quality Transformer [22.908901641767688]
Current state-of-the-art IQA methods are based on convolutional neural networks (CNNs).
We design a multi-scale image quality Transformer (MUSIQ) to process native resolution images with varying sizes and aspect ratios.
With a multi-scale image representation, our proposed method can capture image quality at different granularities.
arXiv Detail & Related papers (2021-08-12T23:36:22Z)
- Compound Frechet Inception Distance for Quality Assessment of GAN Created Images [7.628527132779575]
One notable application of GANs is developing fake human faces, also known as "deep fakes."
Measuring the quality of the generated images is inherently subjective but attempts to objectify quality using standardized metrics have been made.
We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects.
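The Frechet Inception Distance underlying this work compares Gaussian fits to feature statistics of real and generated images: FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2)). A minimal numpy sketch of that Frechet distance between two Gaussians is below; it assumes symmetric positive-definite covariances so the matrix square root can be taken via eigendecomposition (the function name is mine, and production code typically uses a dedicated sqrtm routine):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this formula to the mean and covariance of deep features
    extracted from real vs. generated images.
    """
    diff = mu1 - mu2
    # Matrix square root of sigma1 @ sigma2 via eigendecomposition; the
    # product of two SPD matrices is diagonalizable with non-negative
    # eigenvalues, so clipping tiny negative numerical noise is safe.
    prod = sigma1 @ sigma2
    vals, vecs = np.linalg.eig(prod)
    covmean = (vecs * np.sqrt(np.maximum(vals.real, 0))) @ np.linalg.inv(vecs)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean.real))
```

Identical distributions give a distance of zero; the distance grows as the feature statistics of the generated images drift from those of the real ones.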
arXiv Detail & Related papers (2021-06-16T06:53:27Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Deep Multi-Scale Features Learning for Distorted Image Quality Assessment [20.7146855562825]
Existing deep neural networks (DNNs) have shown significant effectiveness for tackling the IQA problem.
We propose to use pyramid features learning to build a DNN with hierarchical multi-scale features for distorted image quality prediction.
Our proposed network is optimized in a deep end-to-end supervision manner.
arXiv Detail & Related papers (2020-12-01T23:39:01Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.