Non-Reference Quality Monitoring of Digital Images using Gradient
Statistics and Feedforward Neural Networks
- URL: http://arxiv.org/abs/2112.13893v1
- Date: Mon, 27 Dec 2021 20:21:55 GMT
- Title: Non-Reference Quality Monitoring of Digital Images using Gradient
Statistics and Feedforward Neural Networks
- Authors: Nisar Ahmed, Hafiz Muhammad Shahzad Asif, Hassan Khalid
- Abstract summary: A non-reference quality metric is proposed to assess the quality of digital images.
The proposed metric is computationally faster than its counterparts and can be used for the quality assessment of image sequences.
- Score: 0.1657441317977376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital images contain a great deal of redundancy, so compression is
applied to reduce image size without an unreasonable loss of quality. This is
even more pronounced for videos, which consist of image sequences and for which
higher compression ratios are used over low-throughput networks. Assessing
image quality in such scenarios is therefore of particular interest. Subjective
evaluation is infeasible in most of these scenarios, so objective evaluation is
preferred. Of the three classes of objective quality measures, full-reference
and reduced-reference methods require the original image in some form to
compute a quality score, which is not available in scenarios such as
broadcasting or IP video. A non-reference quality metric is therefore proposed
that computes luminance statistics, multiscale gradient statistics, and
mean-subtracted contrast-normalized (MSCN) products as features to train a
feedforward neural network with scaled conjugate gradient. The trained network
provides good regression and R^2 measures, and further testing on the LIVE
Image Quality Assessment database release-2 shows promising results. Pearson,
Kendall, and Spearman correlations between predicted and actual quality scores
are comparable to state-of-the-art systems. Moreover, the proposed metric is
computationally faster than its counterparts and can be used for quality
assessment of image sequences.
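The pipeline sketched in the abstract (MSCN products, multiscale gradient statistics, a small feedforward regressor, and evaluation via Pearson, Kendall, and Spearman correlations) can be illustrated with a short example. This is a minimal sketch, not the authors' implementation: it assumes numpy, scipy, and scikit-learn, takes images as 2-D float grayscale arrays, and substitutes scikit-learn's L-BFGS solver for scaled conjugate gradient; the Gaussian window, exact feature set, and hidden-layer size are likewise illustrative assumptions.

```python
# Minimal sketch, not the authors' released code. The window size, feature
# set, hidden-layer size, and L-BFGS solver are illustrative stand-ins.
import numpy as np
from scipy import ndimage
from scipy.stats import pearsonr, kendalltau, spearmanr
from sklearn.neural_network import MLPRegressor

def mscn_coefficients(gray, sigma=7 / 6, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale image."""
    mu = ndimage.gaussian_filter(gray, sigma)
    var = ndimage.gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + eps)

def summary_stats(x):
    """Mean, std, and normalized 3rd/4th moments of an array, used as features."""
    x = x.ravel()
    m, s = x.mean(), x.std() + 1e-12
    z = (x - m) / s
    return [m, s, np.mean(z ** 3), np.mean(z ** 4)]

def extract_features(gray, scales=(1, 2, 4)):
    """Luminance, MSCN-product, and multiscale gradient-magnitude statistics."""
    feats = summary_stats(gray)                          # luminance statistics
    mscn = mscn_coefficients(gray)
    feats += summary_stats(mscn)
    feats += summary_stats(mscn[:, :-1] * mscn[:, 1:])   # horizontal MSCN products
    feats += summary_stats(mscn[:-1, :] * mscn[1:, :])   # vertical MSCN products
    for s in scales:                                     # gradient stats at several scales
        img = gray[::s, ::s]
        gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
        feats += summary_stats(np.hypot(gx, gy))
    return np.asarray(feats)

def train_and_evaluate(train_imgs, train_mos, test_imgs, test_mos):
    """Fit a small feedforward regressor on the features and report linear and
    rank correlations against ground-truth quality scores."""
    X_tr = np.stack([extract_features(im) for im in train_imgs])
    X_te = np.stack([extract_features(im) for im in test_imgs])
    # L-BFGS is used here in place of scaled conjugate gradient, which
    # scikit-learn does not provide.
    model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000)
    model.fit(X_tr, train_mos)
    pred = model.predict(X_te)
    return {
        "PLCC": pearsonr(pred, test_mos)[0],
        "KRCC": kendalltau(pred, test_mos)[0],
        "SRCC": spearmanr(pred, test_mos)[0],
    }
```

In this sketch the feature vector is a fixed-length concatenation of per-image statistics, so the same extraction can be run frame by frame over an image sequence, which is consistent with the abstract's claim that the metric suits video quality monitoring.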
Related papers
- Fine-grained subjective visual quality assessment for high-fidelity compressed images [4.787528476079247]
The JPEG standardization project AIC is developing a subjective image quality assessment methodology for high-fidelity images.
This paper presents the proposed assessment methods, a dataset of high-quality compressed images, and their corresponding crowdsourced visual quality ratings.
It also outlines a data analysis approach that reconstructs quality scale values in just noticeable difference (JND) units.
arXiv Detail & Related papers (2024-10-12T11:37:19Z) - Attention Down-Sampling Transformer, Relative Ranking and Self-Consistency for Blind Image Quality Assessment [17.04649536069553]
No-reference image quality assessment is a challenging domain that addresses estimating image quality without the original reference.
We introduce an improved mechanism to extract local and non-local information from images via different transformer encoders and CNNs.
A self-consistency approach to self-supervision is presented, explicitly addressing the degradation of no-reference image quality assessment (NR-IQA) models.
arXiv Detail & Related papers (2024-09-11T09:08:43Z) - Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that the Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z) - Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z) - PIQI: Perceptual Image Quality Index based on Ensemble of Gaussian
Process Regression [2.9412539021452715]
Perceptual Image Quality Index (PIQI) is proposed to assess the quality of digital images.
The performance of the PIQI is checked on six benchmark databases and compared with twelve state-of-the-art methods.
arXiv Detail & Related papers (2023-05-16T06:44:17Z) - Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Learning Conditional Knowledge Distillation for Degraded-Reference Image
Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial
Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)