FAVER: Blind Quality Prediction of Variable Frame Rate Videos
- URL: http://arxiv.org/abs/2201.01492v1
- Date: Wed, 5 Jan 2022 07:54:12 GMT
- Title: FAVER: Blind Quality Prediction of Variable Frame Rate Videos
- Authors: Qi Zheng, Zhengzhong Tu, Pavan C. Madhusudana, Xiaoyang Zeng, Alan C. Bovik, Yibo Fan
- Abstract summary: Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Video quality assessment (VQA) remains an important and challenging problem
that affects many applications at the widest scales. Recent advances in mobile
devices and cloud computing techniques have made it possible to capture,
process, and share high resolution, high frame rate (HFR) videos across the
Internet nearly instantaneously. Being able to monitor and control the quality
of these streamed videos can enable the delivery of more enjoyable content and
perceptually optimized rate control. Accordingly, there is a pressing need to
develop VQA models that can be deployed at enormous scales. While some recent
efforts have been applied to full-reference (FR) analysis of variable frame
rate and HFR video quality, the development of no-reference (NR) VQA algorithms
targeting frame rate variations has been little studied. Here, we propose a
first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the
Framerate-Aware Video Evaluator w/o Reference (FAVER). FAVER uses extended
models of spatial natural scene statistics that encompass space-time
wavelet-decomposed video signals to conduct efficient, frame rate sensitive
quality prediction. Our extensive experiments on several HFR video quality
datasets show that FAVER outperforms other blind VQA algorithms at a reasonable
computational cost. To facilitate reproducible research and public evaluation,
an implementation of FAVER is being made freely available online:
https://github.com/uniqzheng/HFR-BVQA.
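The abstract names FAVER's key ingredients, a temporal wavelet decomposition followed by spatial natural scene statistics (NSS), without spelling out their exact form; the authors' real implementation lives in the repository above. Purely as a hedged illustration of those ingredients, the Python sketch below computes generalized Gaussian (shape, scale) features of mean-subtracted contrast-normalized (MSCN) coefficients over one level of temporal Haar subbands. Every function name, filter parameter, and the feature layout here are assumptions made for this sketch, not FAVER's actual design.

```python
# Hedged sketch of NSS features on temporal wavelet subbands, in the spirit
# of FAVER's description; NOT the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma as gamma_fn

def temporal_haar_subbands(frames):
    """One level of temporal Haar decomposition over consecutive frame pairs."""
    frames = frames[: len(frames) - len(frames) % 2]  # drop a trailing odd frame
    a, b = frames[0::2], frames[1::2]
    low = (a + b) / np.sqrt(2.0)   # temporal average: mostly spatial content
    high = (a - b) / np.sqrt(2.0)  # temporal difference: motion / frame rate cues
    return low, high

def mscn(frame, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted contrast-normalized coefficients (BRISQUE-style)."""
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.maximum(var, 0.0)) + eps)

def fit_ggd(x):
    """Moment-matching fit of a zero-mean generalized Gaussian; returns (shape, scale)."""
    x = x.ravel()
    rho = np.mean(x * x) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    betas = np.arange(0.2, 10.0, 0.001)
    r = gamma_fn(1 / betas) * gamma_fn(3 / betas) / gamma_fn(2 / betas) ** 2
    return betas[np.argmin((r - rho) ** 2)], np.sqrt(np.mean(x * x))

def framerate_aware_features(frames):
    """Four NSS features: GGD (shape, scale) of MSCN coefficients per subband.

    In a full model these features would feed a learned regressor (e.g. an SVR)
    trained against subjective quality scores.
    """
    feats = []
    for band in temporal_haar_subbands(frames.astype(np.float64)):
        coeffs = np.stack([mscn(f) for f in band])
        feats.extend(fit_ggd(coeffs))
    return np.asarray(feats)

# Smoke test on random frames in [0, 255], just to exercise the pipeline.
video = np.random.rand(32, 64, 64) * 255.0
print(framerate_aware_features(video))
```

The intuition, consistent with the abstract, is that the highpass temporal subband carries the frame rate dependent statistics: judder and temporal aliasing perturb its MSCN distribution away from the regularity of pristine video, which the GGD shape parameter picks up.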
Related papers
- Making Video Quality Assessment Models Sensitive to Frame Rate Distortions (arXiv, 2022-05-21)
We consider the problem of capturing distortions arising from changes in frame rate as part of Video Quality Assessment (VQA).
We propose a simple fusion framework, whereby temporal features from GREED are combined with existing VQA models.
Our results suggest that employing efficient temporal representations can result in much more robust and accurate VQA models.
- High Frame Rate Video Quality Assessment using VMAF and Entropic Differences (arXiv, 2021-09-27)
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
- FOVQA: Blind Foveated Video Quality Assessment (arXiv, 2021-06-24)
We develop a no-reference (NR) foveated video quality assessment model, called FOVQA.
It is based on new models of space-variant natural scene statistics (NSS) and natural video statistics (NVS).
FOVQA achieves state-of-the-art (SOTA) performance on the new 2D LIVE-FBT-FCVR database.
- ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction (arXiv, 2020-10-26)
We study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality.
We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED), which analyzes the statistics of spatial and temporal band-pass video coefficients (see the entropy sketch after this list).
GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
- Subjective and Objective Quality Assessment of High Frame Rate Videos (arXiv, 2020-07-22)
High frame rate (HFR) videos are becoming increasingly common with the tremendous popularity of live, high-action streaming content such as sports.
The LIVE-YT-HFR dataset comprises 480 videos spanning 6 different frame rates, obtained from 16 diverse contents.
To obtain subjective labels on the videos, we conducted a human study yielding 19,000 quality ratings from a pool of 85 subjects.
- Capturing Video Frame Rate Variations via Entropic Differencing (arXiv, 2020-06-19)
We propose a novel statistical entropic differencing method based on a Generalized Gaussian Distribution model (see the entropy sketch after this list).
Our proposed model correlates very well with subjective scores in the recently proposed LIVE-YT-HFR database.
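Both ST-GREED and the entropic differencing model above rest on closed-form entropies of generalized Gaussian fits to band-pass video coefficients. As a rough sketch of that core quantity (notation chosen here, not taken from either paper): a zero-mean GGD with shape $\beta$ and scale $\alpha$ has density and differential entropy

\[
f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
       \exp\!\left(-\left(\frac{|x|}{\alpha}\right)^{\beta}\right),
\qquad
h = \frac{1}{\beta} + \log\frac{2\alpha\,\Gamma(1/\beta)}{\beta},
\]

and an entropic difference then compares subband entropies of the reference and distorted videos, e.g. $\mathrm{ED} = |h_{\mathrm{ref}} - h_{\mathrm{dis}}|$ aggregated over space-time subbands. GREED's published definition adds variance-based scaling and separate spatial and temporal terms, so treat this only as the flavor of the approach.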
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.