GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content
- URL: http://arxiv.org/abs/2305.02422v3
- Date: Tue, 29 Aug 2023 22:12:04 GMT
- Title: GAMIVAL: Video Quality Prediction on Mobile Cloud Gaming Content
- Authors: Yu-Chih Chen, Avinab Saha, Chase Davis, Bo Qiu, Xiaoming Wang, Rahul
Gowda, Ioannis Katsavounidis, Alan C. Bovik
- Abstract summary: We develop a new gaming-specific NR VQA model called the Gaming Video Quality Evaluator (GAMIVAL)
Using a support vector regression (SVR) as a regressor, GAMIVAL achieves superior performance on the new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
- Score: 30.96557290048384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The mobile cloud gaming industry has been rapidly growing over the last
decade. When streaming gaming videos are transmitted to customers' client
devices from cloud servers, algorithms that can monitor distorted video quality
without having any reference video available are desirable tools. However,
creating No-Reference Video Quality Assessment (NR VQA) models that can
accurately predict the quality of streaming gaming videos rendered by computer
graphics engines is a challenging problem, since gaming content generally
differs statistically from naturalistic videos, often lacks detail, and
contains many smooth regions. Until recently, the problem has been further
complicated by the lack of adequate subjective quality databases of mobile
gaming content. We have created a new gaming-specific NR VQA model called the
Gaming Video Quality Evaluator (GAMIVAL), which combines and leverages the
advantages of spatial and temporal gaming distorted scene statistics models, a
neural noise model, and deep semantic features. Using a support vector
regression (SVR) as a regressor, GAMIVAL achieves superior performance on the
new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
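The abstract's final stage, mapping fused spatial/temporal scene-statistics features and deep semantic features to a quality score with a support vector regressor, can be sketched as below. The feature dimensionality, synthetic data, and SVR hyperparameters are illustrative assumptions, not values taken from the paper:

```python
# Minimal sketch of an SVR-based NR VQA regression stage, in the spirit of
# GAMIVAL's abstract. All feature values and scores here are synthetic
# placeholders, not features or data from the paper.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_videos, n_features = 200, 36                 # feature count is an assumption
X = rng.normal(size=(n_videos, n_features))    # fused per-video feature vectors
y = 20 + 60 * rng.random(n_videos)             # synthetic mean opinion scores (MOS)

# Standardize features, then fit an RBF-kernel SVR (a common choice in NR VQA)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)

predicted_mos = model.predict(X[:5])           # quality predictions for 5 videos
print(predicted_mos.shape)                     # (5,)
```

In practice the regressor would be trained and evaluated with cross-validation over content splits so that no video appears in both the training and test sets.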
Related papers
- VQA$^2$: Visual Question Answering for Video Quality Assessment [76.81110038738699]
Video Quality Assessment (VQA) is a classic field in low-level visual perception.
Recent studies in the image domain have demonstrated that Visual Question Answering (VQA) can markedly enhance low-level visual quality evaluation.
We introduce the VQA2 Instruction dataset - the first visual question answering instruction dataset that focuses on video quality assessment.
The VQA2 series models interleave visual and motion tokens to enhance the perception of spatial-temporal quality details in videos.
arXiv Detail & Related papers (2024-11-06T09:39:52Z)
- LMM-VQA: Advancing Video Quality Assessment with Large Multimodal Models [53.64461404882853]
Video quality assessment (VQA) algorithms are needed to monitor and optimize the quality of streaming videos.
Here, we propose the first Large Multi-Modal Video Quality Assessment (LMM-VQA) model, which introduces a novel visual modeling strategy for quality-aware feature extraction.
arXiv Detail & Related papers (2024-08-26T04:29:52Z)
- Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos [34.219234345158235]
We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos.
We created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos.
arXiv Detail & Related papers (2023-05-26T21:08:17Z)
- Perceptual Quality Assessment of UGC Gaming Videos [60.68777545735441]
We have created a new VQA model specifically designed to succeed on gaming videos.
GAME-VQP successfully captures the unique statistical characteristics of gaming videos.
It outperforms both mainstream general-purpose VQA models and VQA models specifically designed for gaming videos.
arXiv Detail & Related papers (2022-03-31T22:44:26Z)
- Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z)
- FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-a-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
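Claims like FAVER "outperforming other blind VQA algorithms" are conventionally quantified with rank correlation between predicted and subjective scores, most often the Spearman rank-order correlation coefficient (SROCC). A minimal sketch with synthetic scores (not data from any of these papers):

```python
# Compare a VQA model's predictions against subjective mean opinion
# scores (MOS) using Spearman rank-order correlation (SROCC).
# Both arrays below are synthetic illustrations.
import numpy as np
from scipy.stats import spearmanr

mos = np.array([72.1, 55.3, 40.8, 88.0, 63.5])        # subjective scores
predicted = np.array([70.0, 58.2, 38.5, 85.1, 60.0])  # model outputs

srocc, _ = spearmanr(mos, predicted)
print(f"SROCC = {srocc:.3f}")  # 1.000 here: the two rankings match exactly
```

SROCC depends only on rank order, so it is insensitive to any monotonic miscalibration of the predictions; Pearson correlation (PLCC) after a nonlinear fit is usually reported alongside it.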
arXiv Detail & Related papers (2022-01-05T07:54:12Z)
- Patch-VQ: 'Patching Up' the Video Quality Problem [0.9786690381850356]
No-reference (NR) perceptual video quality assessment (VQA) is a complex, unsolved problem that is important for social and streaming media applications.
Current NR models are limited in their prediction capabilities on real-world, "in-the-wild" video data.
We create the largest (by far) subjective video quality dataset, containing 39,000 real-world distorted videos and 117,000 space-time localized video patches.
arXiv Detail & Related papers (2020-11-27T03:46:44Z)
- Towards Deep Learning Methods for Quality Assessment of Computer-Generated Imagery [2.580765958706854]
In contrast to traditional video content, gaming content has special characteristics such as extremely high motion for some games.
In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
arXiv Detail & Related papers (2020-05-02T14:08:39Z)