MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos
- URL: http://arxiv.org/abs/2303.14933v2
- Date: Wed, 19 Apr 2023 07:51:02 GMT
- Title: MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos
- Authors: Zicheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min,
Ying Chen, Guangtao Zhai
- Abstract summary: We build a first-of-a-kind subjective Live VQA database and develop an effective evaluation tool.
MD-VQA achieves state-of-the-art performance on both our Live VQA database and existing compressed VQA databases.
- Score: 39.06800945430703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: User-generated content (UGC) live videos are often degraded by various
distortions introduced during capture and thus exhibit diverse visual qualities.
Such source videos are further compressed and transcoded by media server
providers before being distributed to end-users. Given the rapid growth of UGC
live videos, effective video quality assessment (VQA) tools are needed to
monitor and perceptually optimize live streaming videos in the distribution
process. In this paper, we address the UGC Live VQA problem by constructing a
first-of-a-kind subjective UGC Live VQA database and developing an effective
evaluation tool. Concretely, 418 source UGC videos are collected in real live
streaming scenarios, and 3,762 compressed versions at different bit rates are
generated for the subsequent subjective VQA experiments. Based on the built
database, we develop a Multi-Dimensional VQA (MD-VQA) evaluator that measures
the visual quality of UGC live videos from the semantic, distortion, and motion
aspects respectively. Extensive experimental results show that MD-VQA achieves
state-of-the-art performance on both our UGC Live VQA database and existing
compressed UGC VQA databases.
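The abstract above describes scoring quality along three dimensions (semantic, distortion, and motion) and fusing them into a single prediction. Below is a minimal sketch of such a multi-branch evaluator; it assumes pre-extracted per-video features for each dimension, and all module names and sizes are hypothetical rather than the paper's actual architecture.

```python
# Minimal sketch of a multi-dimensional quality evaluator, assuming
# pre-extracted per-video features for each dimension. This is NOT the
# official MD-VQA implementation; branch sizes and fusion are illustrative.
import torch
import torch.nn as nn

class MultiDimQualityEvaluator(nn.Module):
    def __init__(self, sem_dim=2048, dist_dim=256, motion_dim=256, hidden=128):
        super().__init__()
        # One small projection head per quality dimension.
        self.semantic_head = nn.Sequential(nn.Linear(sem_dim, hidden), nn.ReLU())
        self.distortion_head = nn.Sequential(nn.Linear(dist_dim, hidden), nn.ReLU())
        self.motion_head = nn.Sequential(nn.Linear(motion_dim, hidden), nn.ReLU())
        # Fuse the three branches into a single quality score.
        self.regressor = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, sem_feat, dist_feat, motion_feat):
        fused = torch.cat(
            [
                self.semantic_head(sem_feat),
                self.distortion_head(dist_feat),
                self.motion_head(motion_feat),
            ],
            dim=-1,
        )
        return self.regressor(fused).squeeze(-1)  # predicted quality score

if __name__ == "__main__":
    model = MultiDimQualityEvaluator()
    # Dummy batch of 4 videos with pre-extracted features per dimension.
    score = model(torch.randn(4, 2048), torch.randn(4, 256), torch.randn(4, 256))
    print(score.shape)  # torch.Size([4])
```

In practice, each branch would be fed by a dedicated backbone (for instance a semantic classification network, distortion descriptors, and motion features, as the abstract suggests), with the fusion regressor trained against subjective scores.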
Related papers
- AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results [120.95863275142727]
This paper presents the results of the Compressed Video Quality Assessment challenge, held in conjunction with the Advances in Image Manipulation (AIM) workshop at ECCV 2024.
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos encoded with 14 codecs of various compression standards.
arXiv Detail & Related papers (2024-08-21T20:32:45Z)
- BVI-UGC: A Video Quality Database for User-Generated Content Transcoding [25.371693436870906]
We present a new video quality database, BVI-UGC, for user-generated content (UGC) transcoding.
BVI-UGC contains 60 (non-pristine) reference videos and 1,080 test sequences.
We benchmarked the performance of 10 full-reference and 11 no-reference quality metrics.
arXiv Detail & Related papers (2024-08-13T19:30:12Z)
- CLIPVQA: Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
arXiv Detail & Related papers (2024-07-06T02:32:28Z)
- KVQ: Kwai Video Quality Assessment for Short-form Videos [24.5291786508361]
We establish the first large-scale Kaleidoscope short Video database for Quality assessment, KVQ, which comprises 600 user-uploaded short videos and 3600 processed videos.
We propose the first short-form video quality evaluator, i.e., KSVQE, which identifies the quality-determining semantics using the content understanding of large vision-language models.
arXiv Detail & Related papers (2024-02-11T14:37:54Z)
- SB-VQA: A Stack-Based Video Quality Assessment Framework for Video Enhancement [0.40777876591043155]
We propose a stack-based framework for video quality assessment (VQA) that outperforms existing state-of-the-art methods on enhanced videos.
In addition to proposing the VQA framework for enhanced videos, we also investigate its application on professionally generated content (PGC) videos.
Our experiments demonstrate that existing VQA algorithms can be applied to PGC videos, and we find that VQA performance for PGC videos can be improved by considering the plot of a play.
arXiv Detail & Related papers (2023-05-15T07:44:10Z)
- Audio-Visual Quality Assessment for User Generated Content: Database and Method [61.970768267688086]
Most existing VQA studies only focus on the visual distortions of videos, ignoring that the user's QoE also depends on the accompanying audio signals.
We construct the first AVQA database named the SJTU-UAV database, which includes 520 in-the-wild audio and video (A/V) sequences.
We also design a family of AVQA models, which fuse features from popular VQA methods with audio features via a support vector regressor (SVR).
The experimental results show that with the help of audio signals, the VQA models can evaluate the quality more accurately.
arXiv Detail & Related papers (2023-03-04T11:49:42Z)
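The SJTU-UAV entry above fuses video-quality features with audio features through a support vector regressor. A generic sketch of that kind of late fusion is given below, assuming pre-extracted visual and audio feature vectors per video; the feature sizes and data are stand-ins, not the authors' actual pipeline.

```python
# Generic late-fusion sketch: concatenate visual and audio features and
# regress mean opinion scores with an SVR. Feature extraction is assumed
# to happen elsewhere; the arrays below are random stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_videos = 100
visual_feats = rng.normal(size=(n_videos, 64))   # e.g. pooled frame-level VQA features
audio_feats = rng.normal(size=(n_videos, 32))    # e.g. spectral / embedding audio features
mos = rng.uniform(1.0, 5.0, size=n_videos)       # subjective mean opinion scores

X = np.concatenate([visual_feats, audio_feats], axis=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X, mos)
print(model.predict(X[:5]))  # predicted quality scores for the first 5 videos
```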
- Disentangling Aesthetic and Technical Effects for Video Quality Assessment of User Generated Content [54.31355080688127]
The mechanisms of human quality perception in the YouTube-VQA problem are still to be explored.
We propose a scheme where two separate evaluators are trained with views specifically designed for each issue.
Our blind subjective studies prove that the separate evaluators in DOVER can effectively match human perception on respective disentangled quality issues.
arXiv Detail & Related papers (2022-11-09T13:55:50Z)
- UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we select 60 of the 763 statistical features used by those models to build a fusion-based model, VIDEVAL.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
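The UGC-VQA/VIDEVAL entry above builds its model by selecting a small subset (60 of 763) of statistical features and then regressing quality scores. A toy sketch of that select-then-regress pattern follows; the selector and data here are illustrative, not the paper's exact procedure.

```python
# Toy select-then-regress sketch: pick a fixed-size subset of features,
# then fit an SVR on the reduced representation. The 763/60 figures mirror
# the summary above; the data and selector here are illustrative only.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_videos, n_features, n_selected = 200, 763, 60
features = rng.normal(size=(n_videos, n_features))  # stand-in statistical features
mos = rng.uniform(1.0, 5.0, size=n_videos)          # stand-in subjective scores

model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_regression, k=n_selected),  # keep the 60 most predictive features
    SVR(kernel="rbf"),
)
model.fit(features, mos)
print(model.predict(features[:5]))  # predicted quality scores
```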