Enhancing the Monte Carlo Tree Search Algorithm for Video Game Testing
- URL: http://arxiv.org/abs/2003.07813v1
- Date: Tue, 17 Mar 2020 16:52:53 GMT
- Title: Enhancing the Monte Carlo Tree Search Algorithm for Video Game Testing
- Authors: Sinan Ariyurek, Aysu Betin-Can, Elif Surer
- Abstract summary: We extend the Monte Carlo Tree Search (MCTS) agent with several modifications for game testing purposes.
We analyze the proposed modifications in three parts: we evaluate their effects on the bug-finding performance of the agents, we measure their success under two different computational budgets, and we assess their effects on the human-likeness of the human-like agent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the effects of several Monte Carlo Tree Search (MCTS)
modifications for video game testing. Although MCTS modifications are widely
studied in game playing, their impact on bug finding remains largely unexplored.
Our previous study focused on bug finding: we introduced synthetic and
human-like test goals and used them in Sarsa and MCTS agents to find bugs.
In this study, we extend the MCTS agent with several modifications for game
testing purposes. Furthermore, we present a novel tree reuse strategy. We
experiment with these modifications by testing them on three testbed games,
with four levels each and 45 bugs in total. We use the General Video Game
Artificial Intelligence (GVG-AI) framework to create the testbed games and to
collect 427 human tester trajectories. We analyze the proposed modifications in
three parts: we evaluate their effects on the bug-finding performance of the
agents, we measure their success under two different computational budgets, and
we assess their effects on the human-likeness of the human-like agent. Our
results show that the MCTS modifications improve the bug-finding performance of
the agents.
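The abstract mentions a novel tree reuse strategy but does not spell it out here. As a point of reference, the sketch below shows a generic MCTS loop with the simplest form of reuse (keeping the subtree under the executed action as the next root); the game-state interface (legal_actions, step, is_terminal, reward) and all names are illustrative assumptions, not the authors' GVG-AI implementation.

```python
# Minimal, generic MCTS sketch with naive subtree reuse -- illustrative only.
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state            # snapshot of the (hypothetical) game state
        self.parent = parent
        self.action = action          # action leading from parent to this node
        self.children = []
        self.visits = 0
        self.value = 0.0              # accumulated rollout return
        self.untried = list(state.legal_actions())

    def ucb1(self, c=1.4):
        # Upper Confidence Bound used during selection (child visits are >= 1 here)
        return self.value / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def search(root, budget):
    """Run `budget` MCTS iterations from `root` and return the most-visited child."""
    for _ in range(budget):
        node = root
        # 1. Selection: descend while the node is fully expanded and has children
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one untried child, if any
        if node.untried:
            action = node.untried.pop()
            node.children.append(Node(node.state.step(action), parent=node, action=action))
            node = node.children[-1]
        # 3. Simulation: uniform-random rollout to a terminal state
        state = node.state
        while not state.is_terminal():
            state = state.step(random.choice(state.legal_actions()))
        reward = state.reward()
        # 4. Backpropagation: update statistics along the path back to the root
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits)

def reuse_subtree(chosen_child):
    """Naive tree reuse: the executed action's subtree becomes the next root."""
    chosen_child.parent = None
    return chosen_child
```

Reusing the previous search tree preserves visit statistics between consecutive decisions, which matters most under the tight computational budgets the paper evaluates; the paper's actual reuse strategy and other modifications may differ from this baseline.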
Related papers
- Towards a Characterisation of Monte-Carlo Tree Search Performance in Different Games [1.1567513466696948]
This paper describes work on an initial dataset that we have built to make progress towards such an understanding.
We describe a preliminary analysis and work on training predictive models on this dataset, as well as lessons learned and future plans for a new and improved version of the dataset.
arXiv Detail & Related papers (2024-06-13T15:46:27Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using the scripted AI alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Impact of Decentralized Learning on Player Utilities in Stackelberg Games [57.08270857260131]
In many two-agent systems, each agent learns separately and the rewards of the two agents are not perfectly aligned.
We model these systems as Stackelberg games with decentralized learning and show that standard regret benchmarks result in worst-case linear regret for at least one player.
We develop algorithms to achieve near-optimal $O(T^{2/3})$ regret for both players with respect to these benchmarks.
arXiv Detail & Related papers (2024-02-29T23:38:28Z)
- Deriving and Evaluating a Detailed Taxonomy of Game Bugs [2.2136561577994858]
The goal of this work is to provide a bug taxonomy for games that will help game developers in developing bug-resistant games.
We performed a Multivocal Literature Review (MLR) by analyzing 436 sources, out of which 189 (78 academic and 111 grey) sources reporting bugs encountered in the game development industry were selected for analysis.
The MLR allowed us to finalize a detailed taxonomy of 63 game bug categories from the end-user perspective.
arXiv Detail & Related papers (2023-11-28T09:51:42Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been an observed reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z)
- CommonsenseQA 2.0: Exposing the Limits of AI through Gamification [126.85096257968414]
We construct benchmarks that test the abilities of modern natural language understanding models.
In this work, we propose gamification as a framework for data construction.
arXiv Detail & Related papers (2022-01-14T06:49:15Z)
- Augmenting Automated Game Testing with Deep Reinforcement Learning [0.4129225533930966]
General game testing relies on the use of human play testers, play test scripting, and prior knowledge of areas of interest to produce relevant test data.
We introduce a self-learning mechanism to the game testing framework using deep reinforcement learning (DRL).
DRL can be used to increase test coverage, find exploits, test map difficulty, and detect common problems that arise in the testing of first-person shooter (FPS) games (a minimal coverage-reward sketch appears after this list).
arXiv Detail & Related papers (2021-03-29T11:55:15Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Monte Carlo Tree Search for a single target search game on a 2-D lattice [0.0]
This project imagines a game in which an AI player searches for a stationary target within a 2-D lattice.
We analyze its behavior with different target distributions and compare its efficiency to the Levy Flight Search, a model for animal foraging behavior.
arXiv Detail & Related papers (2020-11-29T01:07:45Z)
- Griddly: A platform for AI research in games [0.0]
We present Griddly as a new platform for Game AI research.
Griddly provides a unique combination of highly customizable games, different observer types and an efficient C++ core engine.
We present a series of baseline experiments to study the effect of different observation configurations and generalization ability of RL agents.
arXiv Detail & Related papers (2020-11-12T13:23:31Z)
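For the DRL-based testing entry above (Augmenting Automated Game Testing with Deep Reinforcement Learning), one common way to steer a learning agent toward coverage is a shaped exploration reward. The snippet below is a generic illustration under that assumption, not the paper's reward design; the grid size and bonus value are arbitrary.

```python
# Illustrative coverage-shaped reward for an RL test agent (not from the paper).
# Rewarding the first visit to each map cell pushes the agent toward unexplored
# areas, which is one simple way an automated tester can raise test coverage.
def coverage_reward(position, visited, cell_size=4.0, bonus=1.0):
    """Return `bonus` the first time the agent enters a grid cell, else 0."""
    cell = (int(position[0] // cell_size), int(position[1] // cell_size))
    if cell in visited:
        return 0.0
    visited.add(cell)
    return bonus
```

In practice such a term would be added to the environment reward during training, so the agent is paid both for progressing in the game and for reaching states a scripted test might miss.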
This list is automatically generated from the titles and abstracts of the papers in this site.