Using Gameplay Videos for Detecting Issues in Video Games
- URL: http://arxiv.org/abs/2307.14749v1
- Date: Thu, 27 Jul 2023 10:16:04 GMT
- Authors: Emanuela Guglielmi, Simone Scalabrino, Gabriele Bavota, Rocco Oliveto
- Abstract summary: Streamers may encounter several problems (such as bugs, glitches, or performance issues) while they play.
The identified problems may negatively impact the user's gaming experience and, in turn, can harm the reputation of the game and of the producer.
We propose and empirically evaluate GELID, an approach for automatically extracting relevant information from gameplay videos.
- Score: 14.41863992598613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context. The game industry has grown considerably in recent years. Every
day, millions of people play video games, not only as a hobby, but also for
professional competitions (e.g., e-sports or speed-running) or to make a living
by entertaining others (e.g., streamers). The latter produce daily a large
amount of gameplay videos in which they also comment live on what they
experience. But no software and, thus, no video game is perfect: streamers may
encounter several problems (such as bugs, glitches, or performance issues)
while they play, and it is unlikely that they explicitly report such issues
to developers. The problems they encounter may negatively impact the user's
gaming experience and, in turn, harm the reputation of the game and of its
producer. Objective. In this paper, we propose and empirically evaluate GELID,
an approach for automatically extracting relevant information from gameplay
videos by (i) identifying video segments in which streamers experienced
anomalies; (ii) categorizing them based on their type (e.g., logic or
presentation); and clustering them based on (iii) the context in which they
appear (e.g., level or game area) and (iv) the specific issue type (e.g., game
crashes). Method. We manually defined a training set for step 2 of GELID
(categorization) and a test set for validating the four components of GELID
in isolation. In total, we manually segmented, labeled, and clustered 170 videos
related to 3 video games, producing a dataset of 604 segments. Results.
While GELID achieves satisfactory results in steps 1 (segmentation) and 4
(specific-issue clustering), it shows limitations in step 3 (game-context
clustering) and, above all, step 2 (categorization).
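The four-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the structure, not GELID's actual implementation: the function names, the keyword heuristic for categorization, and the grouping keys are all hypothetical placeholders.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    text: str           # streamer commentary transcribed for this segment
    category: str = ""  # step 2 output, e.g., "logic" or "presentation"
    context: int = -1   # step 3 output: game-context cluster id
    issue: int = -1     # step 4 output: specific-issue cluster id


def segment_video(transcript: list[str]) -> list[Segment]:
    """Step 1: split a gameplay video into candidate anomaly segments.
    Placeholder: each transcript line becomes one segment."""
    return [Segment(text=line) for line in transcript]


def categorize(seg: Segment) -> Segment:
    """Step 2: assign an anomaly type (hypothetical keyword heuristic)."""
    seg.category = "presentation" if "glitch" in seg.text.lower() else "logic"
    return seg


def cluster_by(segments, key, assign):
    """Steps 3-4: group segments sharing an attribute (placeholder for
    the clustering GELID actually uses)."""
    ids: dict = {}
    for seg in segments:
        k = key(seg)
        ids.setdefault(k, len(ids))
        assign(seg, ids[k])
    return segments


def gelid_pipeline(transcript: list[str]) -> list[Segment]:
    """Run the four steps in order: segment, categorize, cluster twice."""
    segs = [categorize(s) for s in segment_video(transcript)]
    # Step 3: cluster by game context (placeholder key: first token, e.g. level name).
    segs = cluster_by(segs, key=lambda s: s.text.split()[0],
                      assign=lambda s, i: setattr(s, "context", i))
    # Step 4: cluster by specific issue type (placeholder key: the category).
    segs = cluster_by(segs, key=lambda s: s.category,
                      assign=lambda s, i: setattr(s, "issue", i))
    return segs
```

The point of the sketch is the data flow: each segment independently receives a category, and the two clustering steps then group segments along orthogonal axes (where the anomaly happened vs. what kind of anomaly it is).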
Related papers
- VideoGameBunny: Towards vision assistants for video games [4.652236080354487]
This paper describes the development of VideoGameBunny, a LLaVA-style model based on Bunny, specifically tailored for understanding images from video games.
We release intermediate checkpoints, training logs, and an extensive dataset comprising 185,259 video game images from 413 titles.
Our experiments show that our high quality game-related data has the potential to make a relatively small model outperform the much larger state-of-the-art model LLaVa-1.6-34b.
arXiv Detail & Related papers (2024-07-21T23:31:57Z)
- ViLLa: Video Reasoning Segmentation with Large Language Model [48.75470418596875]
We propose a new video segmentation task - video reasoning segmentation.
The task is designed to output tracklets of segmentation masks given a complex input text query.
We present ViLLa: Video reasoning segmentation with a Large Language Model.
arXiv Detail & Related papers (2024-07-18T17:59:17Z) - Finding the Needle in a Haystack: Detecting Bug Occurrences in Gameplay
Videos [10.127506928281413]
We present an automated approach that uses machine learning to predict whether a segment of a gameplay video contains a depiction of a bug.
We analyzed 4,412 segments of 198 gameplay videos to predict whether a segment contains an instance of a bug.
Our approach is effective at detecting segments of a video that contain bugs, achieving a high F1 score of 0.88, outperforming the current state-of-the-art technique for bug classification.
arXiv Detail & Related papers (2023-11-18T01:14:18Z) - Dense Video Captioning: A Survey of Techniques, Datasets and Evaluation
Protocols [53.706461356853445]
Untrimmed videos have interrelated events, dependencies, context, overlapping events, object-object interactions, domain specificity, and other semantics worth describing.
Dense Video Captioning (DVC) aims at detecting and describing different events in a given video.
arXiv Detail & Related papers (2023-11-05T01:45:31Z) - GOAL: A Challenging Knowledge-grounded Video Captioning Benchmark for
Real-time Soccer Commentary Generation [75.60413443783953]
We present GOAL, a benchmark of over 8.9k soccer video clips, 22k sentences, and 42k knowledge triples, proposing a challenging new task setting: Knowledge-grounded Video Captioning (KGVC).
Our data and code are available at https://github.com/THU-KEG/goal.
arXiv Detail & Related papers (2023-03-26T08:43:36Z) - Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors [3.39487428163997]
We show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game.
Our experiments show promising results for employing language models to detect video game bugs.
arXiv Detail & Related papers (2022-10-05T18:44:35Z) - Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z) - CLIP meets GamePhysics: Towards bug identification in gameplay videos
using zero-shot transfer learning [4.168157981135698]
We propose a search method that accepts any English text query as input to retrieve relevant gameplay videos.
Our approach does not rely on any external information (such as video metadata).
An example application of our approach is as a gameplay video search engine to aid in reproducing video game bugs.
arXiv Detail & Related papers (2022-03-21T16:23:02Z) - CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- Playable Video Generation [47.531594626822155]
We aim at allowing a user to control the generated video by selecting a discrete action at every time step as when playing a video game.
The difficulty of the task lies both in learning semantically consistent actions and in generating realistic videos conditioned on the user input.
We propose a novel framework for Playable Video Generation (PVG) that is trained in a self-supervised manner on a large dataset of unlabelled videos.
arXiv Detail & Related papers (2021-01-28T18:55:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.