FloLPIPS: A Bespoke Video Quality Metric for Frame Interpolation
- URL: http://arxiv.org/abs/2207.08119v2
- Date: Thu, 22 Jun 2023 12:51:58 GMT
- Title: FloLPIPS: A Bespoke Video Quality Metric for Frame Interpolation
- Authors: Duolikun Danier, Fan Zhang, David Bull
- Abstract summary: We present a bespoke full reference video quality metric for VFI, FloLPIPS, that builds on the popular perceptual image quality metric, LPIPS.
FloLPIPS shows superior correlation performance with subjective ground truth over 12 popular quality assessors.
- Score: 4.151439675744056
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Video frame interpolation (VFI) serves as a useful tool for many video
processing applications. Recently, it has also been applied in the video
compression domain for enhancing both conventional video codecs and
learning-based compression architectures. While there has been an increased
focus on the development of enhanced frame interpolation algorithms in recent
years, the perceptual quality assessment of interpolated content remains an
open field of research. In this paper, we present a bespoke full reference
video quality metric for VFI, FloLPIPS, that builds on the popular perceptual
image quality metric, LPIPS, which captures the perceptual degradation in
extracted image feature space. In order to enhance the performance of LPIPS for
evaluating interpolated content, we re-designed its spatial feature aggregation
step by using the temporal distortion (through comparing optical flows) to
weight the feature difference maps. Evaluated on the BVI-VFI database, which
contains 180 test sequences with various frame interpolation artefacts,
FloLPIPS shows superior correlation performance (with statistical significance)
with subjective ground truth over 12 popular quality assessors. To facilitate
further research in VFI quality assessment, our code is publicly available at
https://danier97.github.io/FloLPIPS.
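The flow-weighted aggregation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the per-layer LPIPS feature difference maps and the dense optical flow fields (for the reference and the interpolated video) are already computed, and the function name and nearest-neighbour resizing are hypothetical simplifications.

```python
import numpy as np

def flow_weighted_lpips(diff_maps, flow_ref, flow_dis, eps=1e-8):
    """Hypothetical sketch of FloLPIPS-style spatial aggregation.

    diff_maps: list of per-layer LPIPS feature difference maps, each (h, w).
    flow_ref, flow_dis: optical flow fields (H, W, 2) for the reference
    and distorted (interpolated) frame pairs.
    """
    # Temporal distortion: per-pixel magnitude of the flow difference
    # between the reference and the interpolated video.
    flow_err = np.linalg.norm(flow_ref - flow_dis, axis=-1)  # (H, W)
    # Turn the distortion map into spatial weights that sum to one.
    w = flow_err / (flow_err.sum() + eps)
    score = 0.0
    for d in diff_maps:
        h, wd = d.shape
        # Nearest-neighbour resize of the weight map to the feature
        # resolution (a simplification; any resampling would do).
        rows = np.round(np.linspace(0, w.shape[0] - 1, h)).astype(int)
        cols = np.round(np.linspace(0, w.shape[1] - 1, wd)).astype(int)
        wi = w[rows][:, cols]
        wi = wi / (wi.sum() + eps)
        # Weighted spatial pooling of the feature difference map,
        # replacing LPIPS's uniform spatial averaging.
        score += float((wi * d).sum())
    return score / len(diff_maps)
```

Regions where the two flow fields disagree (i.e. where the interpolated motion is wrong) thus contribute more to the final distortion score than regions where the motion is preserved.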
Related papers
- CLIPVQA: Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
arXiv Detail & Related papers (2024-07-06T02:32:28Z)
- Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements [83.5820690348833]
We present a framework for low-level vision tasks that does not require any external training data corpus.
Our approach learns neural modules by optimizing over a corrupted video sequence, leveraging the spatio-temporal coherence and internal statistics of the video.
arXiv Detail & Related papers (2023-12-13T01:57:11Z)
- A Perceptual Quality Metric for Video Frame Interpolation [6.743340926667941]
As video frame interpolation results often contain unique artifacts, existing quality metrics are sometimes not consistent with human perception when measuring them.
Some recent deep learning-based quality metrics are shown to be more consistent with human judgments, but their performance on videos is compromised since they do not consider temporal information.
Our method learns perceptual features directly from videos instead of individual frames.
arXiv Detail & Related papers (2022-10-04T19:56:10Z)
- BVI-VFI: A Video Quality Database for Video Frame Interpolation [3.884484241124158]
Video frame interpolation (VFI) is a fundamental research topic in video processing.
BVI-VFI contains 540 distorted sequences generated by applying five commonly used VFI algorithms.
We benchmarked the performance of 33 classic and state-of-the-art objective image/video quality metrics on the new database.
arXiv Detail & Related papers (2022-10-03T11:15:05Z)
- Exploring the Effectiveness of Video Perceptual Representation in Blind Video Quality Assessment [55.65173181828863]
We propose a temporal perceptual quality index (TPQI) to measure the temporal distortion by describing the graphic morphology of the representation.
Experiments show that TPQI is an effective way of predicting subjective temporal quality.
arXiv Detail & Related papers (2022-07-08T07:30:51Z)
- PeQuENet: Perceptual Quality Enhancement of Compressed Video with Adaptation- and Attention-based Network [27.375830262287163]
We propose a generative adversarial network (GAN) framework to enhance the perceptual quality of compressed videos.
Our framework includes attention and adaptation to different quantization parameters (QPs) in a single model.
Experimental results demonstrate the superior performance of the proposed PeQuENet compared with the state-of-the-art compressed video quality enhancement algorithms.
arXiv Detail & Related papers (2022-06-16T02:49:28Z)
- A Subjective Quality Study for Video Frame Interpolation [4.151439675744056]
We describe a subjective quality study for video frame interpolation (VFI) based on a newly developed video database, BVI-VFI.
BVI-VFI contains 36 reference sequences at three different frame rates and 180 distorted videos generated using five conventional and learning-based VFI algorithms.
arXiv Detail & Related papers (2022-02-15T21:13:23Z)
- Multi-Frame Quality Enhancement On Compressed Video Using Quantised Data of Deep Belief Networks [0.0]
In the age of streaming and surveillance, compressed video enhancement has become a problem in need of constant improvement.
This approach consists of making use of the frames that have the peak quality in the region to improve those that have a lower quality in that region.
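The peak-quality-frame idea in this summary can be sketched in a few lines; per-frame quality scores (e.g. PSNR against the original) are assumed given, and the function name is hypothetical, not taken from the paper.

```python
def nearest_peak_frames(quality):
    """Hypothetical sketch: map each frame index to its nearest
    'peak quality' frame (a local maximum of the per-frame quality
    scores), which a multi-frame enhancement model could then use
    as a high-quality reference for that frame."""
    n = len(quality)
    # Local maxima of the quality curve (ties count as peaks).
    peaks = [i for i in range(n)
             if (i == 0 or quality[i] >= quality[i - 1])
             and (i == n - 1 or quality[i] >= quality[i + 1])]
    # For every frame, pick the closest peak as its reference frame.
    return [min(peaks, key=lambda p: abs(p - i)) for i in range(n)]
```

For example, with per-frame PSNRs `[30, 35, 32, 31, 36, 33]`, frames 1 and 4 are peaks, and each remaining frame is paired with whichever of the two is nearer.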
arXiv Detail & Related papers (2022-01-27T09:14:57Z)
- High Frame Rate Video Quality Assessment using VMAF and Entropic Differences [50.265638572116984]
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
arXiv Detail & Related papers (2021-09-27T04:08:12Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Capturing Video Frame Rate Variations via Entropic Differencing [63.749184706461826]
We propose a novel statistical entropic differencing method based on a Generalized Gaussian Distribution model.
Our proposed model correlates very well with subjective scores in the recently proposed LIVE-YT-HFR database.
arXiv Detail & Related papers (2020-06-19T22:16:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.