Subjective and Objective Analysis of Streamed Gaming Videos
- URL: http://arxiv.org/abs/2203.12824v1
- Date: Thu, 24 Mar 2022 03:02:57 GMT
- Title: Subjective and Objective Analysis of Streamed Gaming Videos
- Authors: Xiangxu Yu, Zhenqiang Ying, Neil Birkbeck, Yilin Wang, Balu Adsumilli and Alan C. Bovik
- Abstract summary: We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
- Score: 60.32100758447269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rising popularity of online User-Generated-Content (UGC) in the form of
streamed and shared videos has hastened the development of perceptual Video
Quality Assessment (VQA) models, which can be used to help optimize their
delivery. Gaming videos, which are a relatively new type of UGC videos, are
created when skilled gamers post videos of their gameplay. These kinds of
screen-captured UGC gameplay videos have become extremely popular on major
streaming platforms like YouTube and Twitch. Synthetically generated gaming
content exhibits statistical behavior different from that of naturalistic
videos, and thus presents challenges to existing VQA algorithms, including
those based on natural scene/video statistics models. A
number of studies have been directed towards understanding the perceptual
characteristics of professionally generated gaming videos arising in gaming
video streaming, online gaming, and cloud gaming. However, little work has been
done on understanding the quality of UGC gaming videos, and how it can be
characterized and predicted. Towards boosting the progress of gaming video VQA
model development, we conducted a comprehensive study of subjective and
objective VQA models on UGC gaming videos. To do this, we created a novel UGC
gaming video resource, called the LIVE-YouTube Gaming video quality
(LIVE-YT-Gaming) database, comprising 600 real UGC gaming videos. We
conducted a subjective human study on this data, yielding 18,600 human quality
ratings recorded by 61 human subjects. We also evaluated a number of
state-of-the-art (SOTA) VQA models on the new database, including a new one,
called GAME-VQP, based on both natural video statistics and CNN-learned
features. To help support work in this field, we are making the new
LIVE-YT-Gaming Database publicly available through the link:
https://live.ece.utexas.edu/research/LIVE-YT-Gaming/index.html .
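The abstract describes GAME-VQP only at a high level: it combines natural video statistics with CNN-learned features. Below is a minimal, hypothetical sketch of how such a fusion-plus-regression quality predictor is commonly assembled. The feature extractors, dimensions, and the choice of a support vector regressor (SVR) are illustrative assumptions, not the paper's actual implementation; the abstract does not specify GAME-VQP's regressor, though the related GAMIVAL model listed below does use an SVR.

```python
# Hypothetical sketch of a GAME-VQP-style pipeline: fuse hand-crafted
# natural-scene-statistics (NSS) features with CNN-learned features,
# then regress the fused vector to mean opinion scores (MOS) with an
# SVR. All extractors here are toy stand-ins, not the paper's code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def nss_features(video: np.ndarray) -> np.ndarray:
    """Toy NSS features: moments of mean-subtracted, contrast-normalized
    (MSCN-like) luma values, pooled over all frames."""
    v = video.astype(np.float64)
    mscn = (v - v.mean()) / (v.std() + 1e-8)
    return np.array([mscn.mean(), mscn.std(),
                     np.mean(mscn ** 3), np.mean(mscn ** 4)])

def cnn_features(video: np.ndarray, dim: int = 128) -> np.ndarray:
    """Placeholder for frame features from a pretrained CNN backbone,
    averaged over frames (a real system would run an actual network)."""
    rng = np.random.default_rng(int(video.sum()) % (2 ** 32))
    return rng.standard_normal(dim)

# Fuse the two feature families and fit an SVR against MOS labels.
videos = [np.random.rand(16, 64, 64) for _ in range(50)]  # toy clips
mos = np.random.uniform(1.0, 5.0, size=50)                # toy MOS labels
X = np.stack([np.concatenate([nss_features(v), cnn_features(v)])
              for v in videos])
regressor = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
regressor.fit(X, mos)
print(regressor.predict(X[:1]))  # predicted quality score for one video
```

In a real system the NSS features would come from an established statistics-based model (BRISQUE- or NIQE-style coefficients, for instance) and the CNN features from a pretrained backbone; the regressor maps the fused vector to the MOS collected in the subjective study.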
Related papers
- Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model [62.38322742493649]
We build a video VQA benchmark covering editing categories, i.e., effect, funny, meme, and game.
Most of the open-source video LMMs perform poorly on the benchmark, indicating a huge domain gap between edited short videos on social media and regular raw videos.
To improve the generalization ability of LMMs, we collect a training set for the proposed benchmark based on both Panda-70M/WebVid raw videos and small-scale TikTok/CapCut edited videos.
arXiv Detail & Related papers (2024-06-15T03:28:52Z)
- AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results [140.47245070508353]
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge, focused on User-Generated Content (UGC).
The aim of this challenge is to gather deep learning-based methods capable of estimating perceptual quality of videos.
The user-generated videos from the YouTube dataset include diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions.
arXiv Detail & Related papers (2024-04-24T21:02:14Z)
- Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos [34.219234345158235]
We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos.
We created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos.
arXiv Detail & Related papers (2023-05-26T21:08:17Z)
- GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content [30.96557290048384]
We develop a new gaming-specific NR VQA model called the Gaming Video Quality Evaluator (GAMIVAL).
Using a support vector regression (SVR) as a regressor, GAMIVAL achieves superior performance on the new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
arXiv Detail & Related papers (2023-05-03T20:29:04Z)
- GOAL: A Challenging Knowledge-grounded Video Captioning Benchmark for Real-time Soccer Commentary Generation [75.60413443783953]
We present GOAL, a benchmark of over 8.9k soccer video clips, 22k sentences, and 42k knowledge triples, supporting a challenging new task setting: Knowledge-grounded Video Captioning (KGVC).
Our data and code are available at https://github.com/THU-KEG/goal.
arXiv Detail & Related papers (2023-03-26T08:43:36Z)
- Audio-Visual Quality Assessment for User Generated Content: Database and Method [61.970768267688086]
Most existing VQA studies only focus on the visual distortions of videos, ignoring that the user's QoE also depends on the accompanying audio signals.
We construct the first AVQA database named the SJTU-UAV database, which includes 520 in-the-wild audio and video (A/V) sequences.
We also design a family of AVQA models, which fuse popular VQA methods with audio features via a support vector regressor (SVR).
The experimental results show that with the help of audio signals, the VQA models can evaluate the quality more accurately.
arXiv Detail & Related papers (2023-03-04T11:49:42Z)
- Perceptual Quality Assessment of UGC Gaming Videos [60.68777545735441]
We have created a new VQA model specifically designed to succeed on gaming videos.
GAME-VQP successfully models the unique statistical characteristics of gaming videos.
It outperforms both mainstream general VQA models and VQA models specifically designed for gaming videos.
arXiv Detail & Related papers (2022-03-31T22:44:26Z)
- CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning [4.168157981135698]
We propose a search method that accepts any English text query as input to retrieve relevant gameplay videos (a minimal zero-shot retrieval sketch follows this list).
Our approach does not rely on any external information (such as video metadata).
An example application of our approach is as a gameplay video search engine to aid in reproducing video game bugs.
arXiv Detail & Related papers (2022-03-21T16:23:02Z)
- Towards Deep Learning Methods for Quality Assessment of Computer-Generated Imagery [2.580765958706854]
In contrast to traditional video content, gaming content has special characteristics such as extremely high motion for some games.
In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
arXiv Detail & Related papers (2020-05-02T14:08:39Z)
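The "CLIP meets GamePhysics" entry above describes retrieving gameplay videos from free-form English queries via zero-shot transfer. A minimal sketch of that style of retrieval follows, using the openly released CLIP model via Hugging Face transformers; the particular checkpoint and the mean-pooling of frame embeddings are illustrative assumptions, not necessarily the paper's method.

```python
# Hypothetical sketch of zero-shot gameplay-video retrieval with CLIP:
# embed an English text query and sampled video frames in CLIP's joint
# space, then rank videos by cosine similarity to the query.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_text(query: str) -> torch.Tensor:
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_video(frames: list[Image.Image]) -> torch.Tensor:
    # Mean-pool per-frame embeddings into one video embedding (an
    # assumption; the paper may pool or score frames differently).
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0, keepdim=True)

def rank_videos(query: str,
                videos: dict[str, list[Image.Image]]) -> list[str]:
    """Return video names sorted by similarity to the text query."""
    q = embed_text(query)
    scores = {name: float(q @ embed_video(frames).T)
              for name, frames in videos.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Usage: rank_videos("a car flying into the sky",
#                    {"clip1": frames1, "clip2": frames2})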