Study of Subjective and Objective Quality Assessment of Mobile Cloud
Gaming Videos
- URL: http://arxiv.org/abs/2305.17260v1
- Date: Fri, 26 May 2023 21:08:17 GMT
- Title: Study of Subjective and Objective Quality Assessment of Mobile Cloud
Gaming Videos
- Authors: Avinab Saha, Yu-Chih Chen, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul
Gowda, Ioannis Katsavounidis, Alan C. Bovik
- Abstract summary: We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos.
We created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present the outcomes of a recent large-scale subjective study of Mobile
Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming
videos. Rapid advancements in cloud services, faster video encoding
technologies, and increased access to high-speed, low-latency wireless internet
have all contributed to the exponential growth of the Mobile Cloud Gaming
industry. Consequently, the development of methods to assess the quality of
real-time video feeds to end-users of cloud gaming platforms has become
increasingly important. However, due to the lack of a large-scale public Mobile
Cloud Gaming Video dataset containing a diverse set of distorted videos with
corresponding subjective scores, there has been limited work on the development
of MCG-VQA models. To accelerate progress toward these goals, we
created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG)
video quality database, composed of 600 landscape and portrait gaming videos,
on which we collected 14,400 subjective quality ratings from an in-lab
subjective study. Additionally, to demonstrate the usefulness of the new
resource, we benchmarked multiple state-of-the-art VQA algorithms on the
database. The new database will be made publicly available on our website:
https://live.ece.utexas.edu/research/LIVE-Meta-Mobile-Cloud-Gaming/index.html
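Studies like this one aggregate raw per-subject ratings into Mean Opinion Scores (MOS) per video, often after per-subject normalization to remove rater bias. The following is an illustrative sketch of that common procedure, not the paper's exact protocol; the ratings matrix is synthetic.

```python
import numpy as np

# Hypothetical raw ratings: 4 subjects x 3 videos, on a 0-100 scale.
ratings = np.array([
    [70.0, 40.0, 90.0],
    [65.0, 35.0, 85.0],
    [80.0, 50.0, 95.0],
    [60.0, 30.0, 80.0],
])

# Plain MOS: average each video's ratings across subjects.
mos = ratings.mean(axis=0)

# A common variant: z-score each subject's ratings first, so that
# systematically harsh or lenient raters contribute comparably.
z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(
    axis=1, ddof=1, keepdims=True
)
zmos = z.mean(axis=0)

print(mos)  # → [68.75 38.75 87.5 ]
```

Outlier-subject rejection (e.g. per ITU-T recommendations) is typically applied before this averaging step in full studies.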
Related papers
- Beyond Raw Videos: Understanding Edited Videos with Large Multimodal Model [62.38322742493649]
We build a video VQA benchmark covering editing categories, i.e., effect, funny, meme, and game.
Most of the open-source video LMMs perform poorly on the benchmark, indicating a huge domain gap between edited short videos on social media and regular raw videos.
To improve the generalization ability of LMMs, we collect a training set for the proposed benchmark based on both Panda-70M/WebVid raw videos and small-scale TikTok/CapCut edited videos.
arXiv Detail & Related papers (2024-06-15T03:28:52Z) - Subjective and Objective Analysis of Indian Social Media Video Quality [31.562787181908167]
We conducted a large-scale subjective study of the perceptual quality of User-Generated Mobile Video Content on a set of mobile-originated videos from ShareChat.
The content has the benefit of culturally diversifying the existing corpus of User-Generated Content (UGC) video quality datasets.
We expect that this new data resource will also allow for the development of systems that can predict the perceived visual quality of Indian social media videos.
arXiv Detail & Related papers (2024-01-05T13:13:09Z) - GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content [30.96557290048384]
We develop a new gaming-specific NR VQA model called the Gaming Video Quality Evaluator (GAMIVAL)
Using a support vector regression (SVR) as a regressor, GAMIVAL achieves superior performance on the new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
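Models in this family extract a vector of quality-aware features per video and map it to a subjective score with a support vector regressor. The sketch below shows that final regression stage with scikit-learn's SVR on synthetic data; the feature dimensionality and data are illustrative assumptions, not GAMIVAL's actual features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in: 100 videos, each with 8 precomputed quality features.
X = rng.normal(size=(100, 8))
# Synthetic "MOS" targets with a linear structure plus noise.
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=100)

# Standardize features, then fit an RBF-kernel SVR, as is typical
# for feature-based NR VQA models.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X)
```

In practice such models are evaluated with rank correlations (SROCC/PLCC) between predicted and subjective scores on held-out content splits.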
arXiv Detail & Related papers (2023-05-03T20:29:04Z) - Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive
Transformer [66.56167074658697]
We present a method that builds on 3D-VQGAN and transformers to generate videos with thousands of frames.
Our evaluation shows that our model trained on 16-frame video clips can generate diverse, coherent, and high-quality long videos.
We also showcase conditional extensions of our approach for generating meaningful long videos by incorporating temporal information with text and audio.
arXiv Detail & Related papers (2022-04-07T17:59:02Z) - Perceptual Quality Assessment of UGC Gaming Videos [60.68777545735441]
We have created a new VQA model specifically designed to succeed on gaming videos.
GAME-VQP successfully predicts the unique statistical characteristics of gaming videos.
It outperforms both mainstream general VQA models and VQA models specifically designed for gaming videos.
arXiv Detail & Related papers (2022-03-31T22:44:26Z) - Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z) - Subjective and Objective Quality Assessment of Mobile Gaming Video [28.809404637914117]
This study presents a brand new Tencent Gaming Video dataset containing 1293 mobile gaming sequences encoded with three different codecs.
We propose an objective quality framework, namely Efficient hard-RAnk Quality Estimator (ERAQUE), that is equipped with a novel hard pairwise ranking loss.
Extensive experiments demonstrate the efficiency and robustness of our model.
arXiv Detail & Related papers (2021-01-27T19:48:15Z) - Subjective and Objective Quality Assessment of High Frame Rate Videos [60.970191379802095]
High frame rate (HFR) videos are becoming increasingly common with the tremendous popularity of live, high-action streaming content such as sports.
The LIVE-YT-HFR dataset comprises 480 videos spanning 6 different frame rates, obtained from 16 diverse contents.
To obtain subjective labels on the videos, we conducted a human study yielding 19,000 human quality ratings obtained from a pool of 85 human subjects.
arXiv Detail & Related papers (2020-07-22T19:11:42Z) - Towards Deep Learning Methods for Quality Assessment of
Computer-Generated Imagery [2.580765958706854]
In contrast to traditional video content, gaming content has special characteristics such as extremely high motion for some games.
In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
arXiv Detail & Related papers (2020-05-02T14:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.