A Subjective Quality Study for Video Frame Interpolation
- URL: http://arxiv.org/abs/2202.07727v2
- Date: Thu, 22 Jun 2023 12:56:35 GMT
- Title: A Subjective Quality Study for Video Frame Interpolation
- Authors: Duolikun Danier, Fan Zhang and David Bull
- Abstract summary: We describe a subjective quality study for video frame interpolation (VFI) based on a newly developed video database, BVI-VFI.
BVI-VFI contains 36 reference sequences at three different frame rates and 180 distorted videos generated using five conventional and learning based VFI algorithms.
- Score: 4.151439675744056
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Video frame interpolation (VFI) is one of the fundamental research areas in
video processing and there has been extensive research on novel and enhanced
interpolation algorithms. The same is not true for quality assessment of the
interpolated content. In this paper, we describe a subjective quality study for
VFI based on a newly developed video database, BVI-VFI. BVI-VFI contains 36
reference sequences at three different frame rates and 180 distorted videos
generated using five conventional and learning based VFI algorithms. Subjective
opinion scores have been collected from 60 human participants, and then
employed to evaluate eight popular quality metrics, including PSNR, SSIM and
LPIPS which are all commonly used for assessing VFI methods. The results
indicate that none of these metrics provide acceptable correlation with the
perceived quality on interpolated content, with the best-performing metric,
LPIPS, offering an SROCC value below 0.6. Our findings show that there is an
urgent need to develop a bespoke perceptual quality metric for VFI. The BVI-VFI
dataset is publicly available and can be accessed at
https://danier97.github.io/BVI-VFI/.
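To make the SROCC figure above concrete, the following is a minimal sketch of how a quality metric is evaluated against subjective scores: the Spearman rank order correlation coefficient (SROCC) is the Pearson correlation computed on the ranks of the two score lists. The numbers below are invented for illustration and are not from the BVI-VFI study.

```python
def ranks(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srocc(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data: objective metric scores vs. mean opinion scores (MOS)
metric = [32.1, 28.4, 35.0, 30.2, 27.8]   # e.g. PSNR in dB (made up)
mos    = [3.8, 2.9, 4.1, 3.0, 3.2]        # subjective ratings (made up)
print(round(srocc(metric, mos), 3))        # prints 0.7
```

An SROCC near 1 means the metric ranks videos the same way viewers do; the study's point is that even the best metric tested stays below 0.6 on interpolated content.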
Related papers
- AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results [120.95863275142727]
This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024.
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos encoded with 14 codecs of various compression standards.
arXiv Detail & Related papers (2024-08-21T20:32:45Z)
- BVI-UGC: A Video Quality Database for User-Generated Content Transcoding [25.371693436870906]
We present a new video quality database, BVI-UGC, for user-generated content (UGC).
BVI-UGC contains 60 (non-pristine) reference videos and 1,080 test sequences.
We benchmarked the performance of 10 full-reference and 11 no-reference quality metrics.
arXiv Detail & Related papers (2024-08-13T19:30:12Z)
- CLIPVQA:Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
arXiv Detail & Related papers (2024-07-06T02:32:28Z)
- Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach [52.07084862209754]
We collect over two million opinions on 4,543 in-the-wild videos on 13 dimensions of quality-related factors.
Specifically, we ask the subjects to label among a positive, a negative, and a neutral choice for each dimension.
These explanation-level opinions allow us to measure the relationships between specific quality factors and abstract subjective quality ratings.
arXiv Detail & Related papers (2023-05-22T05:20:23Z)
- UATVR: Uncertainty-Adaptive Text-Video Retrieval [90.8952122146241]
A common practice is to transfer text-video pairs to the same embedding space and craft cross-modal interactions with certain entities.
We propose an Uncertainty-Adaptive Text-Video Retrieval approach, termed UATVR, which models each look-up as a distribution matching procedure.
arXiv Detail & Related papers (2023-01-16T08:43:17Z)
- Video compression dataset and benchmark of learning-based video-quality metrics [55.41644538483948]
We present a new benchmark for video-quality metrics that evaluates video compression.
It is based on a new dataset consisting of about 2,500 streams encoded using different standards.
Subjective scores were collected using crowdsourced pairwise comparisons.
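Crowdsourced pairwise comparisons, as mentioned above, are typically converted into per-video scale values; one common choice is the Bradley-Terry model. Below is a minimal sketch with a hypothetical `bradley_terry` helper and invented win counts; it is not code from any of the papers listed here.

```python
def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry strengths from a win-count matrix.

    wins[i][j] = number of times video i was preferred over video j.
    Uses the standard minorize-maximize fixed-point update, then
    normalizes the strengths to sum to the number of videos.
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins for video i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom else p[i])
        s = sum(new)
        p = [v * n / s for v in new]  # normalize so strengths sum to n
    return p

# Invented data: 3 videos, 10 comparisons per pair
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
scores = bradley_terry(wins)  # scores[0] > scores[1] > scores[2]
```

The recovered strengths preserve the preference ordering implied by the win counts and can then be correlated with objective metric outputs, as in the SROCC analysis above.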
arXiv Detail & Related papers (2022-11-22T09:22:28Z)
- BVI-VFI: A Video Quality Database for Video Frame Interpolation [3.884484241124158]
Video frame interpolation (VFI) is a fundamental research topic in video processing.
BVI-VFI contains 540 distorted sequences generated by applying five commonly used VFI algorithms.
We benchmarked the performance of 33 classic and state-of-the-art objective image/video quality metrics on the new database.
arXiv Detail & Related papers (2022-10-03T11:15:05Z)
- FloLPIPS: A Bespoke Video Quality Metric for Frame Interpolation [4.151439675744056]
We present a bespoke full reference video quality metric for VFI, FloLPIPS, that builds on the popular perceptual image quality metric, LPIPS.
FloLPIPS shows superior correlation performance with subjective ground truth over 12 popular quality assessors.
arXiv Detail & Related papers (2022-07-17T09:07:33Z)
- VFHQ: A High-Quality Dataset and Benchmark for Video Face Super-Resolution [22.236432686296233]
We develop an automatic and scalable pipeline to collect a high-quality video face dataset (VFHQ).
VFHQ contains over $16,000$ high-fidelity clips of diverse interview scenarios.
We show that the temporal information plays a pivotal role in eliminating video consistency issues.
arXiv Detail & Related papers (2022-05-06T16:31:57Z)
- UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we are able to select 60 of the 763 statistical features used by the leading models.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all summaries) and is not responsible for any consequences.