Finding the Needle in a Haystack: Detecting Bug Occurrences in Gameplay
Videos
- URL: http://arxiv.org/abs/2311.10926v1
- Date: Sat, 18 Nov 2023 01:14:18 GMT
- Title: Finding the Needle in a Haystack: Detecting Bug Occurrences in Gameplay
Videos
- Authors: Andrew Truelove, Shiyue Rong, Eduardo Santana de Almeida, Iftekhar
Ahmed
- Abstract summary: We present an automated approach that uses machine learning to predict whether a segment of a gameplay video contains a depiction of a bug.
We analyzed 4,412 segments of 198 gameplay videos to predict whether a segment contains an instance of a bug.
Our approach is effective at detecting segments of a video that contain bugs, achieving a high F1 score of 0.88, outperforming the current state-of-the-art technique for bug classification.
- Score: 10.127506928281413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of bugs in video games can bring significant consequences for
developers. To avoid these consequences, developers can leverage gameplay
videos to identify and fix these bugs. Video hosting websites such as YouTube
provide access to millions of game videos, including videos that depict bug
occurrences, but the large amount of content can make finding bug instances
challenging. We present an automated approach that uses machine learning to
predict whether a segment of a gameplay video contains the depiction of a bug.
We analyzed 4,412 segments of 198 gameplay videos to predict whether a segment
contains an instance of a bug. Additionally, we investigated how our approach
performs when applied across different specific genres of video games and on
videos from the same game. We also analyzed the videos in the dataset to
investigate what characteristics of the visual features might explain the
classifier's prediction. Finally, we conducted a user study to examine the
benefits of our automated approach against a manual analysis. Our findings
indicate that our approach is effective at detecting segments of a video that
contain bugs, achieving a high F1 score of 0.88, outperforming the current
state-of-the-art technique for bug classification of gameplay video segments.
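The abstract leaves the pipeline details open; the following is a minimal, hypothetical sketch of one way such a segment-level bug classifier could be assembled: pretrained-CNN features per frame, mean-pooled per segment, and a linear classifier scored with F1. The backbone, pooling, and classifier choices are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch (not the authors' implementation) of segment-level bug
# classification: pretrained-CNN features per frame, mean-pooled per segment,
# then a linear classifier evaluated with F1.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Pretrained backbone with its classification head removed (2048-d features).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def segment_feature(frames):
    """Mean-pool backbone features over the frames sampled from one segment.

    `frames` is a list of HxWx3 uint8 arrays (e.g. frames sampled at 1 fps).
    """
    batch = torch.stack([preprocess(f) for f in frames])
    return backbone(batch).mean(dim=0).numpy()

def train_and_evaluate(train_segments, train_labels, test_segments, test_labels):
    """Fit a linear classifier on pooled segment features and report F1."""
    X_train = [segment_feature(s) for s in train_segments]
    X_test = [segment_feature(s) for s in test_segments]
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X_train, train_labels)
    return f1_score(test_labels, clf.predict(X_test))
```

In a sketch like this, the frame sampling rate and segment length would matter as much as the classifier itself, and a per-game or per-genre split (as studied in the paper) would change the reported numbers.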
Related papers
- Semantic GUI Scene Learning and Video Alignment for Detecting Duplicate Video-based Bug Reports [16.45808969240553]
Video-based bug reports are increasingly being used to document bugs for programs centered around a graphical user interface (GUI).
We introduce a new approach, called JANUS, that adapts the scene-learning capabilities of vision transformers to capture subtle visual and textual patterns that manifest on app UI screens.
JANUS also makes use of a video alignment technique that adaptively weights video frames to account for typical bug manifestation patterns.
arXiv Detail & Related papers (2024-07-11T15:48:36Z)
- VANE-Bench: Video Anomaly Evaluation Benchmark for Conversational LMMs [64.60035916955837]
VANE-Bench is a benchmark designed to assess the proficiency of Video-LMMs in detecting anomalies and inconsistencies in videos.
Our dataset comprises an array of videos synthetically generated using existing state-of-the-art text-to-video generation models.
We evaluate nine existing Video-LMMs, both open- and closed-source, on this benchmarking task and find that most of the models struggle to identify the subtle anomalies effectively.
arXiv Detail & Related papers (2024-06-14T17:59:01Z)
- Using Gameplay Videos for Detecting Issues in Video Games [14.41863992598613]
Streamers may encounter several problems (such as bugs, glitches, or performance issues) while they play.
The identified problems may negatively impact the user's gaming experience and, in turn, can harm the reputation of the game and of the producer.
We propose and empirically evaluate GELID, an approach for automatically extracting relevant information from gameplay videos.
arXiv Detail & Related papers (2023-07-27T10:16:04Z)
- Video Event Extraction via Tracking Visual States of Arguments [72.54932474653444]
We propose a novel framework to detect video events by tracking the changes in the visual states of all involved arguments.
In order to capture the visual state changes of arguments, we decompose them into changes in pixels within objects, displacements of objects, and interactions among multiple arguments.
arXiv Detail & Related papers (2022-11-03T13:12:49Z)
- Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors [3.39487428163997]
We show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game.
Our results are promising for employing language models to detect video game bugs; a minimal prompt-based sketch of this idea appears after this list.
arXiv Detail & Related papers (2022-10-05T18:44:35Z)
- CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning [4.168157981135698]
We propose a search method that accepts any English text query as input to retrieve relevant gameplay videos.
Our approach does not rely on any external information (such as video metadata).
An example application of our approach is as a gameplay video search engine to aid in reproducing video game bugs; a minimal zero-shot retrieval sketch of this kind appears after this list.
arXiv Detail & Related papers (2022-03-21T16:23:02Z)
- Learning to Identify Perceptual Bugs in 3D Video Games [1.370633147306388]
We show that it is possible to identify a range of perceptual bugs using learning-based methods.
World of Bugs (WOB) is an open platform for testing automated bug detection (ABD) methods in 3D game environments.
arXiv Detail & Related papers (2022-02-25T18:50:11Z)
- Few-Shot Learning for Video Object Detection in a Transfer-Learning Scheme [70.45901040613015]
We study the new problem of few-shot learning for video object detection.
We employ a transfer-learning framework to effectively train the video object detector on a large number of base-class objects and a few video clips of novel-class objects.
arXiv Detail & Related papers (2021-03-26T20:37:55Z)
- Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling [98.41300980759577]
A canonical approach to video-and-language learning requires a neural model to learn from offline-extracted dense video features.
We propose a generic framework ClipBERT that enables affordable end-to-end learning for video-and-language tasks.
Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms existing methods.
arXiv Detail & Related papers (2021-02-11T18:50:16Z)
- Playable Video Generation [47.531594626822155]
We aim at allowing a user to control the generated video by selecting a discrete action at every time step as when playing a video game.
The difficulty of the task lies both in learning semantically consistent actions and in generating realistic videos conditioned on the user input.
We propose a novel framework for PVG that is trained in a self-supervised manner on a large dataset of unlabelled videos.
arXiv Detail & Related papers (2021-01-28T18:55:58Z)
- Coherent Loss: A Generic Framework for Stable Video Segmentation [103.78087255807482]
We investigate how a jittering artifact degrades the visual quality of video segmentation results.
We propose a Coherent Loss with a generic framework to enhance the performance of a neural network against jittering artifacts.
arXiv Detail & Related papers (2020-10-25T10:48:28Z)
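For the "Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors" entry above, a minimal prompt-based sketch of the idea might look as follows; the prompt wording, model name, and use of the OpenAI Python client are assumptions for illustration, not that paper's setup.

```python
# Hypothetical sketch: ask an LLM which event in a sequence of textual
# game-event descriptions looks like a bug. Prompt, model, and client usage
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def find_buggy_event(events: list[str], model: str = "gpt-4o-mini") -> str:
    """Ask the model which numbered event description describes a bug."""
    numbered = "\n".join(f"{i + 1}. {e}" for i, e in enumerate(events))
    prompt = (
        "The following is a sequence of events observed in a video game:\n"
        f"{numbered}\n"
        "One of these events is a bug. Answer with the number of the buggy "
        "event and a one-sentence explanation."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example with a hypothetical event log:
# find_buggy_event([
#     "The player opens a door.",
#     "The player walks through a wall.",
#     "The player picks up a health pack.",
# ])
```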
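For the "CLIP meets GamePhysics" entry above, a minimal zero-shot retrieval sketch could score sampled gameplay frames against an English text query with a pretrained CLIP model; the checkpoint, frame sampling, and ranking below are illustrative assumptions rather than that paper's pipeline.

```python
# Hypothetical sketch: rank sampled gameplay frames against a text query with
# a pretrained CLIP model (zero-shot, no video metadata needed).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

@torch.no_grad()
def rank_frames_by_query(frame_paths, query):
    """Score sampled video frames against an English text query and rank them."""
    images = [Image.open(p).convert("RGB") for p in frame_paths]
    inputs = processor(text=[query], images=images,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # logits_per_image: similarity of each frame to the single query text.
    scores = outputs.logits_per_image.squeeze(-1)
    order = torch.argsort(scores, descending=True).tolist()
    return [(frame_paths[i], scores[i].item()) for i in order]

# Example: find frames that may depict a physics bug (hypothetical paths/query).
# ranked = rank_frames_by_query(["frame_001.png", "frame_002.png"],
#                               "a car flying in the air in a video game")
```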