Making Video Quality Assessment Models Sensitive to Frame Rate Distortions
- URL: http://arxiv.org/abs/2205.10501v1
- Date: Sat, 21 May 2022 04:13:57 GMT
- Title: Making Video Quality Assessment Models Sensitive to Frame Rate Distortions
- Authors: Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli and
Alan C. Bovik
- Abstract summary: We consider the problem of capturing distortions arising from changes in frame rate as part of Video Quality Assessment (VQA).
We propose a simple fusion framework whereby temporal features from GREED are combined with existing VQA models.
Our results suggest that employing efficient temporal representations can result in much more robust and accurate VQA models.
- Score: 63.749184706461826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of capturing distortions arising from changes in
frame rate as part of Video Quality Assessment (VQA). Variable frame rate (VFR)
videos have become much more common, and streamed videos commonly range from 30
frames per second (fps) up to 120 fps. VFR-VQA offers unique challenges in
terms of distortion types as well as in making non-uniform comparisons of
reference and distorted videos having different frame rates. The majority of
current VQA models require compared videos to be of the same frame rate, but
are unable to adequately account for frame rate artifacts. The recently
proposed Generalized Entropic Difference (GREED) VQA model succeeds at this
task, using natural video statistics models of entropic differences of temporal
band-pass coefficients, delivering superior performance on predicting video
quality changes arising from frame rate distortions. Here we propose a simple
fusion framework, whereby temporal features from GREED are combined with
existing VQA models, towards improving model sensitivity towards frame rate
distortions. We find through extensive experiments that this feature fusion
significantly boosts model performance on both HFR/VFR datasets as well as
fixed frame rate (FFR) VQA databases. Our results suggest that employing
efficient temporal representations can result in much more robust and accurate
VQA models when frame rate variations occur.
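The fusion idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes, hypothetically, that per-video GREED temporal features and baseline VQA features have already been extracted as vectors, and simply concatenates them before fitting a linear quality regressor by least squares.

```python
import numpy as np

# Illustrative sketch only: GREED feature extraction itself is not shown.
# We assume (hypothetically) that each video already has a baseline VQA
# feature vector, a GREED temporal feature vector, and a subjective score.
rng = np.random.default_rng(0)
n_videos, d_base, d_greed = 50, 6, 4

base_feats = rng.normal(size=(n_videos, d_base))    # features from an existing VQA model
greed_feats = rng.normal(size=(n_videos, d_greed))  # frame-rate-aware temporal features
scores = rng.uniform(0, 100, size=n_videos)         # synthetic subjective quality scores

# Feature fusion: concatenate both feature sets (plus a bias column),
# then fit a linear regressor mapping fused features to quality scores.
fused = np.hstack([base_feats, greed_feats, np.ones((n_videos, 1))])
weights, *_ = np.linalg.lstsq(fused, scores, rcond=None)

predicted = fused @ weights  # one predicted quality score per video
print(predicted.shape)
```

In the paper's setting the regressor would be trained on subjective study data; the concatenation step is the essential point, since it lets a frame-rate-insensitive model inherit GREED's temporal sensitivity.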
Related papers
- DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment [56.42140467085586]
Some temporal variations cause temporal distortions and lead to additional quality degradation.
The human visual system pays different levels of attention to frames with different content.
We propose a novel and effective transformer-based VQA method to tackle these two issues.
arXiv Detail & Related papers (2022-06-20T15:31:27Z)
- FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-its-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z)
- FREGAN: an application of generative adversarial networks in enhancing the frame rate of videos [1.1688030627514534]
The FREGAN (Frame Rate Enhancement Generative Adversarial Network) model is proposed, which predicts future frames of a video sequence based on a sequence of past frames.
We have validated the effectiveness of the proposed model on the standard datasets.
The experimental outcomes illustrate that the proposed model achieves a peak signal-to-noise ratio (PSNR) of 34.94 dB and a structural similarity index (SSIM) of 0.95.
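For reference, the two metrics reported above can be computed as follows. This is a minimal sketch: PSNR directly from mean squared error, and a simplified single-window SSIM over the whole image rather than the usual locally windowed implementation.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, dist, peak=255.0):
    """Simplified SSIM over the whole image (real implementations use local windows)."""
    x = ref.astype(np.float64)
    y = dist.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Toy example: a gradient image with a uniform +5 offset as "distortion".
ref = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
noisy = np.clip(ref + 5, 0, 255)
print(round(psnr(ref, noisy), 2))  # ≈ 34.15 dB (MSE = 25 against a 255 peak)
```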
arXiv Detail & Related papers (2021-11-01T17:19:00Z)
- High Frame Rate Video Quality Assessment using VMAF and Entropic Differences [50.265638572116984]
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
arXiv Detail & Related papers (2021-09-27T04:08:12Z)
- ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction [63.749184706461826]
We study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality.
We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) which analyzes the statistics of spatial and temporal band-pass video coefficients.
GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
arXiv Detail & Related papers (2020-10-26T16:54:33Z)
- Capturing Video Frame Rate Variations via Entropic Differencing [63.749184706461826]
We propose a novel statistical entropic differencing method based on a Generalized Gaussian Distribution model.
Our proposed model correlates very well with subjective scores in the recently proposed LIVE-YT-HFR database.
arXiv Detail & Related papers (2020-06-19T22:16:52Z)
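The entropic-differencing idea shared by the GREED papers above can be illustrated loosely. This sketch is emphatically not the GREED algorithm: it substitutes simple successive-frame differences for the temporal band-pass decomposition and a histogram entropy estimate for the GGD-based entropy model, just to show the shape of the comparison between reference and distorted videos.

```python
import numpy as np

def temporal_bandpass(frames):
    """Crudest temporal band-pass stand-in: successive frame differences."""
    return np.diff(frames.astype(np.float64), axis=0)

def sample_entropy(x, bins=64):
    """Histogram estimate of the entropy (in bits) of coefficient values."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropic_difference(ref_frames, dist_frames):
    """Absolute difference of temporal-coefficient entropies (a GREED-like scalar)."""
    return abs(sample_entropy(temporal_bandpass(ref_frames))
               - sample_entropy(temporal_bandpass(dist_frames)))

# Toy "video": 16 frames of 32x32 noise; the distorted version repeats
# every other frame, mimicking a frame-drop/frame-repeat artifact.
rng = np.random.default_rng(1)
ref = rng.normal(size=(16, 32, 32))
repeated = ref.copy()
repeated[1::2] = repeated[0::2]

print(entropic_difference(ref, ref))       # identical videos: difference is 0.0
print(entropic_difference(ref, repeated))  # frame repetition shifts temporal entropy
```

The point the sketch conveys is that frame rate artifacts reshape the distribution of temporal coefficients, so an entropy comparison picks them up even when pixelwise metrics do not.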
This list is automatically generated from the titles and abstracts of the papers on this site.