Deriving and Evaluating a Detailed Taxonomy of Game Bugs
- URL: http://arxiv.org/abs/2311.16645v1
- Date: Tue, 28 Nov 2023 09:51:42 GMT
- Title: Deriving and Evaluating a Detailed Taxonomy of Game Bugs
- Authors: Nigar Azhar Butt, Salman Sherin, Muhammad Uzair Khan, Atif Aftab
Jilani, and Muhammad Zohaib Iqbal
- Abstract summary: The goal of this work is to provide a bug taxonomy for games that will help game developers in developing bug-resistant games.
We performed a Multivocal Literature Review (MLR) by analyzing 436 sources, out of which 189 (78 academic and 111 grey) sources reporting bugs encountered in the game development industry were selected for analysis.
The MLR allowed us to finalize a detailed taxonomy of 63 game bug categories from the end-user perspective.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game development has become an extremely competitive multi-billion-dollar
industry. Many games fail even after years of development efforts because of
game-breaking bugs that disrupt the game-play and ruin the player experience.
The goal of this work is to provide a bug taxonomy for games that will help
game developers in developing bug-resistant games, game testers in designing
and executing fault-finding test cases, and researchers in evaluating game
testing approaches. For this purpose, we performed a Multivocal Literature
Review (MLR) by analyzing 436 sources, out of which 189 (78 academic and 111
grey) sources reporting bugs encountered in the game development industry were
selected for analysis. We validate the proposed taxonomy by conducting a survey
involving different game industry practitioners. The MLR allowed us to finalize
a detailed taxonomy of 63 game bug categories from the end-user perspective, including
eight first-tier categories: Gaming Balance, Implementation Response, Network,
Sound, Temporal, Unexpected Crash, Navigational, and Non-Temporal faults. We
observed that manual approaches towards game testing are still widely used.
Only one of the approaches targets sound bugs, whereas game balancing and the
incorporation of machine learning into game testing are trending in the recent
literature. Most of the game testing techniques are specialized and dependent
on specific platforms.
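The eight first-tier categories named in the abstract can be pictured as the top level of the taxonomy. The sketch below is illustrative only: the category names come from the abstract, but the keyword-based triage function and its keyword mapping are hypothetical, not part of the paper.

```python
# First-tier bug categories from the taxonomy (names taken from the abstract).
FIRST_TIER_CATEGORIES = [
    "Gaming Balance",
    "Implementation Response",
    "Network",
    "Sound",
    "Temporal",
    "Unexpected Crash",
    "Navigational",
    "Non-Temporal",
]

def classify(report):
    """Toy keyword-based triage into a first-tier category.

    The keyword mapping is a made-up illustration, not the paper's method;
    returns None when no keyword matches.
    """
    keywords = {
        "lag": "Network",
        "audio": "Sound",
        "crash": "Unexpected Crash",
        "stuck": "Navigational",
    }
    lowered = report.lower()
    for word, category in keywords.items():
        if word in lowered:
            return category
    return None

print(classify("Game crashed on level load"))  # Unexpected Crash
```

In practice each first-tier category would fan out into sub-categories (63 in total per the paper), so a real implementation would likely use a tree rather than a flat list.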
Related papers
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Finding the Needle in a Haystack: Detecting Bug Occurrences in Gameplay Videos [10.127506928281413]
We present an automated approach that uses machine learning to predict whether a segment of a gameplay video contains a depiction of a bug.
To evaluate it, we analyzed 4,412 segments from 198 gameplay videos.
Our approach is effective at detecting segments of a video that contain bugs, achieving a high F1 score of 0.88, outperforming the current state-of-the-art technique for bug classification.
arXiv Detail & Related papers (2023-11-18T01:14:18Z)
- Predicting Defective Visual Code Changes in a Multi-Language AAA Video Game Project [54.20154707138088]
We focus on constructing visual code defect prediction models that encompass visual code metrics.
We test our models using features extracted from the history of a AAA video game project.
We find that defect prediction models have better performance overall in terms of the area under the ROC curve.
arXiv Detail & Related papers (2023-09-07T00:18:43Z)
- Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games [58.720142291102135]
We describe an effort to add an experimental reinforcement learning system to an existing automated game testing solution based on scripted bots.
We show a use-case of leveraging reinforcement learning in game production and cover some of the largest time sinks that anyone attempting a similar effort for their game may encounter.
We propose a few research directions that we believe will be valuable and necessary for making machine learning, and especially reinforcement learning, an effective tool in game production.
arXiv Detail & Related papers (2023-07-19T18:19:23Z)
- Using Developer Discussions to Guide Fixing Bugs in Software [51.00904399653609]
We propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for additional information from developers.
We demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.
arXiv Detail & Related papers (2022-11-11T16:37:33Z)
- Large Language Models are Pretty Good Zero-Shot Video Game Bug Detectors [3.39487428163997]
We show that large language models can identify which event is buggy in a sequence of textual descriptions of events from a game.
The results are promising for employing language models to detect video game bugs.
arXiv Detail & Related papers (2022-10-05T18:44:35Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Learning to Identify Perceptual Bugs in 3D Video Games [1.370633147306388]
We show that it is possible to identify a range of perceptual bugs using learning-based methods.
World of Bugs (WOB) is an open platform for testing automated bug detection (ABD) methods in 3D game environments.
arXiv Detail & Related papers (2022-02-25T18:50:11Z)
- CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- Augmenting Automated Game Testing with Deep Reinforcement Learning [0.4129225533930966]
General game testing relies on the use of human play testers, play test scripting, and prior knowledge of areas of interest to produce relevant test data.
We introduce a self-learning mechanism to the game testing framework using deep reinforcement learning (DRL).
DRL can be used to increase test coverage, find exploits, test map difficulty, and to detect common problems that arise in the testing of first-person shooter (FPS) games.
arXiv Detail & Related papers (2021-03-29T11:55:15Z)
- Enhancing the Monte Carlo Tree Search Algorithm for Video Game Testing [0.0]
We extend the Monte Carlo Tree Search (MCTS) agent with several modifications for game testing purposes.
We analyze the proposed modifications in three parts: we evaluate their effects on bug finding performances of agents, we measure their success under two different computational budgets, and we assess their effects on human-likeness of the human-like agent.
arXiv Detail & Related papers (2020-03-17T16:52:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.