FOVQA: Blind Foveated Video Quality Assessment
- URL: http://arxiv.org/abs/2106.13328v1
- Date: Thu, 24 Jun 2021 21:38:22 GMT
- Title: FOVQA: Blind Foveated Video Quality Assessment
- Authors: Yize Jin, Anjul Patney, Richard Webb, Alan Bovik
- Abstract summary: We develop a no-reference (NR) foveated video quality assessment model, called FOVQA.
It is based on new models of space-variant natural scene statistics (NSS) and natural video statistics (NVS).
FOVQA achieves state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database.
- Score: 1.4127304025810108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous blind or No Reference (NR) video quality assessment (VQA) models
largely rely on features drawn from natural scene statistics (NSS), but under
the assumption that the image statistics are stationary in the spatial domain.
Several of these models are quite successful on standard pictures. However, in
Virtual Reality (VR) applications, foveated video compression is regaining
attention, and the concept of space-variant quality assessment is of interest,
given the availability of increasingly high spatial and temporal resolution
contents and practical ways of measuring gaze direction. Distortions from
foveated video compression increase with increased eccentricity, implying that
the natural scene statistics are space-variant. Towards advancing the
development of foveated compression / streaming algorithms, we have devised a
no-reference (NR) foveated video quality assessment model, called FOVQA, which
is based on new models of space-variant natural scene statistics (NSS) and
natural video statistics (NVS). Specifically, we deploy a space-variant
generalized Gaussian distribution (SV-GGD) model and a space-variant
asynchronous generalized Gaussian distribution (SV-AGGD) model of mean
subtracted contrast normalized (MSCN) coefficients and products of neighboring
MSCN coefficients, respectively. We devise a foveated video quality predictor
that extracts radial basis features, and other features that capture
perceptually annoying rapid quality fall-offs. We find that FOVQA achieves
state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database, as
compared with other leading FIQA / VQA models. We have made our implementation
of FOVQA available at: http://live.ece.utexas.edu/research/Quality/FOVQA.zip.
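To make the abstract's pipeline concrete, below is a minimal, hypothetical Python sketch (not the authors' released implementation) of its two named ingredients: mean subtracted contrast normalized (MSCN) coefficients and a generalized Gaussian fit whose parameters are tracked as a function of eccentricity from the gaze point. The function names, the Gaussian window width, and the concentric-ring binning are illustrative assumptions.

```python
# Hypothetical sketch of space-variant NSS features; NOT the released FOVQA code.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as gamma_fn

def mscn(frame, sigma=7.0 / 6.0, c=1.0):
    """Mean subtracted contrast normalized (MSCN) coefficients of a grayscale frame."""
    mu = gaussian_filter(frame, sigma)                      # local Gaussian-weighted mean
    var = gaussian_filter(frame * frame, sigma) - mu * mu   # local variance
    sd = np.sqrt(np.maximum(var, 0.0))                      # local standard deviation
    return (frame - mu) / (sd + c)

def fit_ggd(x, alphas=np.arange(0.2, 10.0, 0.001)):
    """Moment-matching estimate of a zero-mean GGD: returns (shape alpha, std sigma)."""
    x = x.ravel()
    r = np.mean(np.abs(x)) ** 2 / (np.mean(x * x) + 1e-12)
    # For a GGD, E[|x|]^2 / E[x^2] = Gamma(2/a)^2 / (Gamma(1/a) * Gamma(3/a)).
    rho = gamma_fn(2.0 / alphas) ** 2 / (gamma_fn(1.0 / alphas) * gamma_fn(3.0 / alphas))
    alpha = alphas[np.argmin(np.abs(rho - r))]
    return alpha, np.sqrt(np.mean(x * x))

def ggd_vs_eccentricity(frame, gaze_xy, n_rings=8):
    """Fit a GGD to MSCN coefficients inside concentric rings centered on the gaze
    point, so (alpha, sigma) can be inspected as a function of eccentricity."""
    coeffs = mscn(frame.astype(np.float64))
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    edges = np.linspace(0.0, radius.max() + 1e-6, n_rings + 1)
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = coeffs[(radius >= lo) & (radius < hi)]
        fits.append(fit_ggd(ring) if ring.size else (np.nan, np.nan))
    return fits
```

On a foveated-compressed frame, one would expect the fitted shape parameter to drift with ring index as compression artifacts grow toward the periphery; that space-variant behavior is what the SV-GGD features summarize, and the SV-AGGD model would be applied analogously to products of neighboring MSCN coefficients.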
Related papers
- Enhancing Blind Video Quality Assessment with Rich Quality-aware Features [79.18772373737724]
We present a simple but effective method to enhance blind video quality assessment (BVQA) models for social media videos.
We explore rich quality-aware features from pre-trained blind image quality assessment (BIQA) and BVQA models as auxiliary features.
Experimental results demonstrate that the proposed model achieves the best performance on three public social media VQA datasets.
arXiv Detail & Related papers (2024-05-14T16:32:11Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment [55.65173181828863]
We propose a temporal perceptual quality index (TPQI) to measure the temporal distortion by describing the graphic morphology of the representation.
Experiments show that TPQI is an effective way of predicting subjective temporal quality.
arXiv Detail & Related papers (2022-07-08T07:30:51Z) - A Deep Learning based No-reference Quality Assessment Model for UGC Videos [44.00578772367465]
Previous video quality assessment (VQA) studies either use image recognition models or image quality assessment (IQA) models to extract frame-level features of videos for quality regression.
We propose a very simple but effective VQA model, which trains an end-to-end spatial feature extraction network to learn the quality-aware spatial feature representation from raw pixels of the video frames.
With the better quality-aware features, we only use a simple multilayer perceptron (MLP) network to regress them into chunk-level quality scores, and then a temporal average pooling strategy is adopted to obtain the video-level quality score. (A toy sketch of this chunk-to-video pooling appears after this list.)
arXiv Detail & Related papers (2022-04-29T12:45:21Z) - FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z) - Evaluating Foveated Video Quality Using Entropic Differencing [1.5877673959068452]
We propose a full reference (FR) foveated image quality assessment algorithm, which employs the natural scene statistics of bandpass responses.
We evaluate the proposed algorithm by measuring the correlations of the predictions that FED makes against human judgements.
The proposed algorithm yields state-of-the-art performance as compared with other existing full reference algorithms.
arXiv Detail & Related papers (2021-06-12T16:29:13Z) - Study on the Assessment of the Quality of Experience of Streaming Video [117.44028458220427]
In this paper, the influence of various objective factors on the subjective estimation of the QoE of streaming video is studied.
The paper presents standard and handcrafted features, and shows their correlations and p-values of significance.
We use the SQoE-III database, so far the largest and most realistic of its kind.
arXiv Detail & Related papers (2020-12-08T18:46:09Z) - ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction [63.749184706461826]
We study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality.
We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) which analyzes the statistics of spatial and temporal band-pass video coefficients.
GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
arXiv Detail & Related papers (2020-10-26T16:54:33Z)
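As a side note on the UGC VQA entry above, here is a toy Python sketch (names, layer sizes, and weights are assumptions, not details taken from that paper) of regressing chunk-level features to chunk scores with an MLP and then temporal-average-pooling them into a single video-level score.

```python
# Toy illustration of chunk-score regression plus temporal average pooling;
# weights are random placeholders, not a trained quality model.
import numpy as np

def mlp_chunk_scores(chunk_features, w1, b1, w2, b2):
    """Two-layer MLP regressor: (n_chunks, d) features -> (n_chunks,) quality scores."""
    hidden = np.maximum(chunk_features @ w1 + b1, 0.0)  # ReLU hidden layer
    return (hidden @ w2 + b2).ravel()

def video_quality(chunk_features, params):
    """Chunk-level scores followed by temporal average pooling into one video score."""
    return float(mlp_chunk_scores(chunk_features, *params).mean())

rng = np.random.default_rng(0)
d, h = 128, 64  # assumed feature and hidden-layer sizes
params = (rng.normal(size=(d, h)), np.zeros(h), rng.normal(size=(h, 1)), np.zeros(1))
print(video_quality(rng.normal(size=(10, d)), params))  # one score for a 10-chunk video
```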