Frequency-domain Blind Quality Assessment of Blurred and
Blocking-artefact Images using Gaussian Process Regression model
- URL: http://arxiv.org/abs/2303.02753v1
- Date: Sun, 5 Mar 2023 19:20:55 GMT
- Title: Frequency-domain Blind Quality Assessment of Blurred and
Blocking-artefact Images using Gaussian Process Regression model
- Authors: Maryam Viqar, Athar A. Moinuddin, Ekram Khan, M. Ghanbari
- Abstract summary: Most of the standard image and video codecs are block-based, and depending upon the compression ratio, the compressed images/videos suffer from different distortions.
This paper proposes a methodology to blindly measure the overall quality of an image suffering from these distortions, individually as well as jointly.
It is relatively fast compared to many state-of-the-art methods, and is therefore suitable for real-time quality monitoring applications.
- Score: 0.5735035463793008
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most of the standard image and video codecs are block-based, and depending upon the compression ratio, the compressed images/videos suffer from different distortions. At low ratios, blurriness is observed, and as compression increases, blocking artifacts occur. Generally, in order to reduce blockiness, images are low-pass filtered, which leads to more blurriness. Both distortions are also commonly seen in bokeh-mode images: blurriness resulting from the intentionally blurred background, and blocking artifacts and global blurriness arising from compression. Therefore, such visual media suffer from both blockiness and blurriness distortions. In addition, noise is another commonly encountered distortion. Most of the existing works on quality assessment quantify these distortions individually. This paper proposes a methodology to blindly measure the overall quality of an image suffering from these distortions, individually as well as jointly. This is achieved by considering the sums of absolute values of low- and high-frequency Discrete Fourier Transform (DFT) coefficients, defined as sum magnitudes. The numbers of blocks whose sum magnitudes lie in specific ranges, including blocks with zero-valued AC coefficients, and the means of the 100 maximum and 100 minimum values of these sum magnitudes are used as the feature vector. These features are then fed to a machine-learning-based Gaussian Process Regression (GPR) model, which quantifies the image quality. The simulation results show that the proposed method can estimate the quality of images distorted with blockiness, blurriness, noise and their combinations. It is relatively fast compared to many state-of-the-art methods, and is therefore suitable for real-time quality monitoring applications.
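To make the pipeline above concrete, the following is a minimal sketch of the feature extraction and GPR stages, assuming 8x8 blocks, a 2x2 low-frequency corner of the block spectrum, and illustrative sum-magnitude bin edges; the abstract fixes none of these constants, so treat them as placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def sum_magnitudes(img, block=8):
    """Per-block sums of |DFT coefficient| over the low- and high-frequency
    parts of the spectrum (the "sum magnitudes" of the abstract)."""
    lows, highs = [], []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            mag = np.abs(np.fft.fft2(img[y:y + block, x:x + block]))
            low = np.zeros_like(mag, dtype=bool)
            low[:2, :2] = True   # assumed low-frequency corner
            low[0, 0] = False    # exclude the DC coefficient
            lo = mag[low].sum()
            lows.append(lo)
            highs.append(mag.sum() - mag[0, 0] - lo)
    return np.asarray(lows), np.asarray(highs)

def feature_vector(img, edges=(1e2, 1e3, 1e4, 1e5)):
    """Block counts per sum-magnitude range, the count of blocks whose AC
    coefficients are all (near) zero, and the means of the 100 largest and
    100 smallest sum magnitudes, for both frequency bands."""
    feats = []
    for s in sum_magnitudes(img):
        feats.append(np.count_nonzero(s < 1e-6))                  # zero-AC blocks
        feats.extend(np.histogram(s, bins=(0,) + edges + (np.inf,))[0])
        s = np.sort(s)
        feats.extend([s[-100:].mean(), s[:100].mean()])
    return np.asarray(feats, dtype=float)

# Training on images with subjective scores (e.g., MOS), then predicting:
#   X = np.stack([feature_vector(im) for im in train_images])
#   gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, mos)
#   quality = gpr.predict(feature_vector(test_image)[None, :])
```

The kernel choice (RBF plus a white-noise term) is likewise an assumption; any standard GPR kernel could be substituted and tuned on the training databases.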
Related papers
- Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization [45.20189929583484]
We decompose the deblurring (regression) task into blur pixel discretization and discrete-to-continuous conversion tasks.
Specifically, we generate the discretized image residual errors by identifying the blur pixels and then transform them to a continuous form.
arXiv Detail & Related papers (2024-04-18T13:22:56Z)
- Deep Generative Model based Rate-Distortion for Image Downscaling Assessment [19.952415887709154]
We propose Image Downscaling Assessment by Rate-Distortion (IDA-RD), a novel measure to quantitatively evaluate image downscaling algorithms.
arXiv Detail & Related papers (2024-03-22T11:48:09Z)
- Arbitrary-Scale Image Generation and Upsampling using Latent Diffusion Model and Implicit Neural Decoder [29.924160271522354]
Super-resolution (SR) and image generation are important tasks in computer vision and are widely adopted in real-world applications.
Most existing methods, however, generate images only at fixed-scale magnification and suffer from over-smoothing and artifacts.
The most relevant prior work applied Implicit Neural Representation (INR) to the denoising diffusion model to obtain continuous-resolution, diverse and high-quality SR results.
We propose a novel pipeline that can super-resolve an input image, or generate a novel image from random noise, at arbitrary scales.
arXiv Detail & Related papers (2024-03-15T12:45:40Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss (a hedged sketch of such a composite loss follows the Related papers list).
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Learning-Based and Quality Preserving Super-Resolution of Noisy Images [0.0]
We propose a learning-based method that accounts for the presence of noise and preserves the properties of the input image.
We perform our tests on the Cineca Marconi100 cluster, ranked 26th in the TOP500 list.
arXiv Detail & Related papers (2023-11-03T22:00:50Z)
- PixelPyramids: Exact Inference Models from Lossless Image Pyramids [58.949070311990916]
PixelPyramids is a block-autoregressive approach with scale-specific representations to encode the joint distribution of image pixels.
It yields state-of-the-art results for density estimation on various image datasets, especially for high-resolution data.
For CelebA-HQ 1024 x 1024, the density estimates improve to 44% of the baseline, with sampling speeds superior even to easily parallelizable flow-based models.
arXiv Detail & Related papers (2021-10-17T10:47:29Z)
- Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show that our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z)
- Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency components simultaneously.
To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z)
- Designing a Practical Degradation Model for Deep Blind Image Super-Resolution [134.9023380383406]
Single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from that of real images.
This paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations (a toy sketch of the shuffling idea also follows the Related papers list).
arXiv Detail & Related papers (2021-03-25T17:40:53Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
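Below is a hedged sketch of a composite loss of the kind named in the Semantic Ensemble Loss entry above, combining Charbonnier, perceptual, style (Gram) and a non-binary adversarial term. The VGG-16 feature layer, the loss weights, and the least-squares form of the adversarial term are illustrative assumptions, not details taken from that paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen VGG-16 features for the perceptual and style terms (layer choice assumed).
_vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def charbonnier(x, y, eps=1e-3):
    # Smooth L1-like pixel-wise loss.
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def gram(f):
    # Channel-wise Gram matrix used by the style loss.
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def ensemble_loss(pred, target, disc_out, w=(1.0, 0.1, 10.0, 0.01)):
    fp, ft = _vgg(pred), _vgg(target)
    l_char = charbonnier(pred, target)
    l_perc = F.l1_loss(fp, ft)                  # perceptual loss
    l_style = F.l1_loss(gram(fp), gram(ft))     # style (Gram) loss
    # "Non-binary" adversarial term, assumed here to be least-squares: the
    # discriminator output is regressed toward a real-valued target rather
    # than a hard 0/1 label.
    l_adv = F.mse_loss(disc_out, torch.ones_like(disc_out))
    return w[0] * l_char + w[1] * l_perc + w[2] * l_style + w[3] * l_adv
```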
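Similarly, a toy sketch of the "randomly shuffled" degradation idea named in the practical degradation model entry above: blur, downsampling and noise are applied in a random order. The kernel widths, the fixed 2x scale and the noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, rng=None):
    """Apply blur, downsampling and Gaussian noise in a random order."""
    rng = rng if rng is not None else np.random.default_rng(0)
    ops = [
        lambda x: gaussian_filter(x, sigma=rng.uniform(0.5, 3.0)),       # blur
        lambda x: x[::2, ::2],                                           # 2x downsampling
        lambda x: x + rng.normal(0.0, rng.uniform(1.0, 25.0), x.shape),  # noise
    ]
    rng.shuffle(ops)  # the random shuffling of the degradation order
    for op in ops:
        img = op(img)
    return img
```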
This list is automatically generated from the titles and abstracts of the papers in this site.