Perceptual Quality Assessment of UGC Gaming Videos
- URL: http://arxiv.org/abs/2204.00128v1
- Date: Thu, 31 Mar 2022 22:44:26 GMT
- Title: Perceptual Quality Assessment of UGC Gaming Videos
- Authors: Xiangxu Yu, Zhengzhong Tu, Neil Birkbeck, Yilin Wang, Balu Adsumilli
and Alan C. Bovik
- Abstract summary: We have created a new VQA model specifically designed to succeed on gaming videos.
GAME-VQP successfully captures the unique statistical characteristics of gaming videos.
It outperforms both mainstream general-purpose VQA models and VQA models designed specifically for gaming videos.
- Score: 60.68777545735441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, with the vigorous development of the video game
industry, the proportion of gaming videos on major video websites like YouTube
has dramatically increased. However, relatively little research has been done
on the automatic quality prediction of gaming videos, especially on those that
fall in the category of "User-Generated-Content" (UGC). Since current leading
general-purpose Video Quality Assessment (VQA) models do not perform well on
this type of gaming video, we have created a new VQA model specifically
designed to succeed on UGC gaming videos, which we call the Gaming Video
Quality Predictor (GAME-VQP). GAME-VQP successfully captures the unique
statistical characteristics of gaming videos by drawing upon features designed
under modified natural scene statistics models, combined with gaming-specific
features learned by a Convolutional Neural Network. We study the performance
of GAME-VQP on a very recent large UGC gaming video database called
LIVE-YT-Gaming, and find that it outperforms both mainstream general-purpose
VQA models and VQA models designed specifically for gaming videos. The new
model will be made public after the paper is accepted.
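To make the NSS-plus-CNN pipeline described in the abstract concrete, below is a minimal, hypothetical Python sketch: it computes mean-subtracted contrast-normalized (MSCN) coefficients, fits a generalized Gaussian distribution (GGD) to them (a standard spatial natural scene statistics feature, as in BRISQUE/NIQE-style models), pools the statistics over frames, and leaves room for a CNN feature branch. Every function name, the stubbed CNN branch, and the parameter choices are illustrative assumptions, not the GAME-VQP implementation.

```python
# Minimal sketch of NSS feature extraction for video quality prediction.
# All names are hypothetical assumptions; this is NOT the GAME-VQP code.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(frame, sigma=7.0 / 6.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a gray frame."""
    mu = gaussian_filter(frame, sigma)
    var = np.abs(gaussian_filter(frame * frame, sigma) - mu * mu)
    return (frame - mu) / (np.sqrt(var) + 1.0)

def fit_ggd(coeffs):
    """Moment-matching fit of a zero-mean generalized Gaussian: (shape, scale)."""
    x = coeffs.ravel()
    rho = np.mean(np.abs(x)) ** 2 / (np.mean(x * x) + 1e-12)
    shapes = np.arange(0.2, 10.0, 0.001)
    ratio = gamma(2.0 / shapes) ** 2 / (gamma(1.0 / shapes) * gamma(3.0 / shapes))
    return shapes[np.argmin((ratio - rho) ** 2)], np.sqrt(np.mean(x * x))

def video_features(gray_frames, cnn_features=None):
    """Average per-frame NSS statistics; append CNN features if provided."""
    nss = np.array([fit_ggd(mscn(f)) for f in gray_frames]).mean(axis=0)
    return nss if cnn_features is None else np.concatenate([nss, cnn_features])

if __name__ == "__main__":
    frames = [np.random.rand(64, 64) for _ in range(4)]  # stand-in gray frames
    print(video_features(frames))  # -> [mean GGD shape, mean RMS scale]
```

A regressor (for example, support vector regression, which several of the related papers below use) would then map such per-video feature vectors to subjective mean opinion scores.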
Related papers
- Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model [62.38322742493649]
We build a video VQA benchmark covering editing categories, i.e., effect, funny, meme, and game.
Most of the open-source video LMMs perform poorly on the benchmark, indicating a huge domain gap between edited short videos on social media and regular raw videos.
To improve the generalization ability of LMMs, we collect a training set for the proposed benchmark based on both Panda-70M/WebVid raw videos and small-scale TikTok/CapCut edited videos.
arXiv Detail & Related papers (2024-06-15T03:28:52Z) - FunQA: Towards Surprising Video Comprehension [64.58663825184958]
We introduce FunQA, a challenging video question-answering dataset.
FunQA covers three previously unexplored types of surprising videos: HumorQA, CreativeQA, and MagicQA.
In total, the FunQA benchmark consists of 312K free-text QA pairs derived from 4.3K video clips.
arXiv Detail & Related papers (2023-06-26T17:59:55Z) - StarVQA+: Co-training Space-Time Attention for Video Quality Assessment [56.548364244708715]
The self-attention-based Transformer has achieved great success in many computer vision tasks.
However, its application to video quality assessment (VQA) has not been satisfactory so far.
This paper presents a co-trained Space-Time Attention network for the VQA problem, termed StarVQA+.
arXiv Detail & Related papers (2023-06-21T14:27:31Z) - Study of Subjective and Objective Quality Assessment of Mobile Cloud
Gaming Videos [34.219234345158235]
We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos.
We created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos.
arXiv Detail & Related papers (2023-05-26T21:08:17Z) - GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content [30.96557290048384]
We develop a new gaming-specific no-reference (NR) VQA model called the Gaming Video Quality Evaluator (GAMIVAL).
Using support vector regression (SVR) as the regressor, GAMIVAL achieves superior performance on the new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
arXiv Detail & Related papers (2023-05-03T20:29:04Z) - Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z) - UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated
Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we extract 60 of the 763 statistical features used by those models to build a new fusion-based model, dubbed VIDEVAL (a minimal sketch of this style of feature selection appears after this list).
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
arXiv Detail & Related papers (2020-05-29T00:39:20Z) - Towards Deep Learning Methods for Quality Assessment of
Computer-Generated Imagery [2.580765958706854]
In contrast to traditional video content, gaming content has special characteristics such as extremely high motion for some games.
In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
arXiv Detail & Related papers (2020-05-02T14:08:39Z)