Capturing Video Frame Rate Variations via Entropic Differencing
- URL: http://arxiv.org/abs/2006.11424v2
- Date: Wed, 21 Oct 2020 01:02:00 GMT
- Title: Capturing Video Frame Rate Variations via Entropic Differencing
- Authors: Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli, Alan C. Bovik
- Abstract summary: We propose a novel statistical entropic differencing method based on a Generalized Gaussian Distribution model.
Our proposed model correlates very well with subjective scores in the recently proposed LIVE-YT-HFR database.
- Score: 63.749184706461826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High frame rate videos have become increasingly popular in recent
years, driven by the entertainment and streaming industries' demand to deliver
a high quality of experience to consumers. To achieve the best trade-off
between bandwidth requirements and video quality when adapting frame rates, it
is imperative to understand the effects of frame rate on video quality. In this
direction, we devise a novel statistical entropic differencing method, based on
a Generalized Gaussian Distribution (GGD) model expressed in the spatial and
temporal band-pass domains, which measures the difference in quality between
reference and distorted videos. The proposed design is highly generalizable and
can be employed even when the reference and distorted sequences have different
frame rates. Our proposed model correlates very well with subjective scores on
the recently proposed LIVE-YT-HFR database and achieves state-of-the-art
performance compared with existing methodologies.
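To make the core idea concrete, here is a minimal sketch of entropic differencing: fit a Generalized Gaussian Distribution (GGD) to temporal band-pass coefficients of the reference and distorted videos, then compare the entropies of the fits. The band-pass step is approximated by simple frame differencing, the GGD is estimated by moment matching, and the function names are illustrative; the actual model also uses spatial subbands and per-band scaling not shown here.

```python
import numpy as np
from scipy.special import gamma

def ggd_entropy(coeffs, eps=1e-8):
    """Fit a GGD to the coefficients by moment matching and return its
    differential entropy: h = 1/beta + log(2 * alpha * Gamma(1/beta) / beta)."""
    x = coeffs.ravel()
    sigma2 = x.var() + eps
    # The ratio E[x^2] / E[|x|]^2 identifies the shape parameter beta.
    rho = sigma2 / (np.mean(np.abs(x)) ** 2 + eps)
    betas = np.linspace(0.2, 10.0, 2000)
    rho_grid = gamma(1.0 / betas) * gamma(3.0 / betas) / gamma(2.0 / betas) ** 2
    beta = betas[np.argmin(np.abs(rho_grid - rho))]
    alpha = np.sqrt(sigma2 * gamma(1.0 / beta) / gamma(3.0 / beta))
    return 1.0 / beta + np.log(2.0 * alpha * gamma(1.0 / beta) / beta)

def entropic_difference(ref_frames, dis_frames):
    """Quality feature: absolute difference between GGD entropies of
    temporal band-pass coefficients (frame differences) of two videos.
    Inputs are arrays of shape (num_frames, height, width)."""
    ref_bp = np.diff(ref_frames.astype(np.float64), axis=0)
    dis_bp = np.diff(dis_frames.astype(np.float64), axis=0)
    return abs(ggd_entropy(ref_bp) - ggd_entropy(dis_bp))
```

In the full method, such entropies are computed across multiple spatial and temporal subbands and then combined; collapsing them into one global statistic, as above, conveys only the basic measurement.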
Related papers
- ResQ: Residual Quantization for Video Perception [18.491197847596283]
We propose a novel quantization scheme for video networks, coined Residual Quantization.
We extend our model to dynamically adjust the bit-width in proportion to the amount of change in the video.
arXiv Detail & Related papers (2023-08-18T12:41:10Z)
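The core idea named in this blurb, quantizing inter-frame residuals and spending more bits when a frame changes more, can be sketched as follows. This is a hypothetical illustration rather than the paper's network quantization scheme; the uniform quantizer, threshold, and bit-widths are all assumptions.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x at the given bit-width."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x
    scale = peak / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def residual_quantize_video(frames, low_bits=2, high_bits=8, thresh=0.05):
    """Keep the first frame at full precision, then quantize only the
    residual to the previous reconstruction; use a higher bit-width
    when the residual energy (amount of change) is large."""
    recon = [frames[0].astype(np.float64)]
    for frame in frames[1:]:
        residual = frame.astype(np.float64) - recon[-1]
        energy = np.mean(np.abs(residual)) / 255.0  # assumes 8-bit input
        bits = high_bits if energy > thresh else low_bits
        recon.append(recon[-1] + quantize(residual, bits))
    return recon
```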
- VIDM: Video Implicit Diffusion Models [75.90225524502759]
Diffusion models have emerged as a powerful generative method for synthesizing high-quality and diverse images.
We propose a video generation method based on diffusion models, where the effects of motion are modeled in an implicit condition.
We improve the quality of the generated videos by proposing multiple strategies such as sampling space truncation, robustness penalty, and positional group normalization.
arXiv Detail & Related papers (2022-12-01T02:58:46Z)
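Of the strategies listed, sampling space truncation is the easiest to illustrate: the initial Gaussian latent of the reverse diffusion process is restricted to a bounded region, trading diversity for sample quality. The sketch below uses plain clipping and an assumed bound; the paper's actual mechanism may differ.

```python
import numpy as np

def truncated_initial_latent(shape, bound=1.5, rng=None):
    """Draw z ~ N(0, I) and truncate the sampling space by clipping
    every coordinate to [-bound, bound] before denoising begins."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(shape)
    return np.clip(z, -bound, bound)

# Example: a latent for 16 frames of 3x64x64 video (shapes are assumed).
# z0 = truncated_initial_latent((16, 3, 64, 64))
```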
- Making Video Quality Assessment Models Sensitive to Frame Rate Distortions [63.749184706461826]
We consider the problem of capturing distortions arising from changes in frame rate as part of Video Quality Assessment (VQA).
We propose a simple fusion framework, whereby temporal features from GREED are combined with existing VQA models.
Our results suggest that employing efficient temporal representations can yield much more robust and accurate VQA models.
arXiv Detail & Related papers (2022-05-21T04:13:57Z)
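One plausible realization of such a fusion framework: concatenate GREED's temporal entropy features with the score of an existing VQA model and learn a regressor from the combined vector to subjective quality. The sketch below assumes scikit-learn's SVR as the learner and a VMAF-style base score; both choices are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fuse_features(greed_temporal, base_score):
    """One fused feature vector per video: GREED temporal features
    plus the base VQA model's scalar score."""
    return np.concatenate([greed_temporal, [base_score]])

# Training (greed_feats, base_scores, mos are precomputed per video):
# X = np.stack([fuse_features(g, s) for g, s in zip(greed_feats, base_scores)])
# model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
# model.fit(X, mos)
# predicted_quality = model.predict(X[:1])
```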
- FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z)
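Because FAVER is blind, it must predict quality from the distorted video's own statistics, with no reference available. Below is a minimal sketch of that setup, assuming simple statistics of temporal differences at two lags as framerate-aware features and a ridge regressor trained on subjective scores; the actual features and learner in FAVER differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

def blind_hfr_features(frames):
    """No-reference features from temporal differences at two lags
    (a crude stand-in for a temporal band-pass decomposition).
    `frames` has shape (num_frames, height, width)."""
    x = frames.astype(np.float64)
    feats = []
    for lag in (1, 2):
        d = x[lag:] - x[:-lag]
        feats += [np.log(d.var() + 1e-8), np.mean(np.abs(d))]
    return np.array(feats)

# Training pairs each video's features with its subjective score (MOS):
# X = np.stack([blind_hfr_features(v) for v in train_videos])
# model = Ridge(alpha=1.0).fit(X, train_mos)
# quality = model.predict(blind_hfr_features(test_video)[None, :])
```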
- High Frame Rate Video Quality Assessment using VMAF and Entropic Differences [50.265638572116984]
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
arXiv Detail & Related papers (2021-09-27T04:08:12Z)
- ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction [63.749184706461826]
We study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality.
We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) which analyzes the statistics of spatial and temporal band-pass video coefficients.
GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
arXiv Detail & Related papers (2020-10-26T16:54:33Z)
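The statistical core of GREED admits a compact closed form. If band-pass coefficients follow a Generalized Gaussian Distribution with scale alpha and shape beta, the differential entropy is known exactly, and a simplified entropic difference between reference (R) and distorted (D) subbands can be written as below; GREED's actual features add scaling and weighting beyond this simplified form.

```latex
% GGD model of band-pass coefficients
f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
       \exp\!\left(-\left(\tfrac{|x|}{\alpha}\right)^{\beta}\right)

% Differential entropy of the fitted GGD
h(\alpha,\beta) = \frac{1}{\beta}
  + \log\!\left(\frac{2\alpha\,\Gamma(1/\beta)}{\beta}\right)

% Simplified entropic difference feature
\Delta h = \bigl|\, h(\alpha_R,\beta_R) - h(\alpha_D,\beta_D) \,\bigr|
```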
- Efficient Semantic Video Segmentation with Per-frame Inference [117.97423110566963]
In this work, we perform efficient semantic video segmentation in a per-frame fashion during inference.
We employ compact models for real-time execution. To narrow the performance gap between compact and large models, new knowledge distillation methods are designed.
arXiv Detail & Related papers (2020-02-26T12:24:32Z)
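The distillation idea can be illustrated with a standard per-pixel objective: the compact student matches both the ground-truth labels and the large teacher's softened predictions. This is generic knowledge distillation, not the specific methods designed in the paper; the temperature and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Segmentation distillation: cross-entropy to ground truth plus
    KL divergence to the teacher's softened per-pixel distribution.
    Logits: (batch, classes, H, W); labels: (batch, H, W)."""
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)  # standard temperature-squared scaling
    return alpha * ce + (1.0 - alpha) * kl
```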
This list is automatically generated from the titles and abstracts of the papers on this site.