Reinforcement Learning for High-Level Strategic Control in Tower Defense Games
- URL: http://arxiv.org/abs/2406.07980v1
- Date: Wed, 12 Jun 2024 08:06:31 GMT
- Title: Reinforcement Learning for High-Level Strategic Control in Tower Defense Games
- Authors: Joakim Bergdahl, Alessandro Sestini, Linus Gisslén
- Abstract summary: In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using a heuristic AI alone.
- Score: 47.618236610219554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players. Many mobile titles feature quick gameplay loops that allow players to progress steadily, requiring an abundance of levels and puzzles to prevent them from reaching the end too quickly. As with any content creation, testing and validation are essential to ensure engaging gameplay mechanics, enjoyable game assets, and playable levels. In this paper, we propose an automated approach that can be leveraged for gameplay testing and validation that combines traditional scripted methods with reinforcement learning, reaping the benefits of both approaches while adapting to new situations similarly to how a human player would. We test our solution on a popular tower defense game, Plants vs. Zombies. The results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using only heuristic AI, achieving a 57.12% success rate compared to 47.95% in a set of 40 levels. Moreover, the results demonstrate the difficulty of training a general agent for this type of puzzle-like game.
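The paper does not ship an implementation, so the snippet below is only a minimal sketch of one way a scripted heuristic and a learned policy could be combined into a single high-level controller in the spirit of the abstract. Everything in it is an assumption for illustration: the discretized (sun, wave) state, the plant_* action names, the tabular Q-learning update, and the visit-count hand-off rule are not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a high-level controller that
# mixes a hand-authored heuristic with a learned RL policy. All state,
# action, and class names are illustrative assumptions.

import random
from collections import defaultdict


class ScriptedPolicy:
    """Simple hand-authored heuristic: build economy early, defense later."""

    def act(self, state):
        sun, wave = state
        return "plant_sunflower" if sun < 200 and wave < 3 else "plant_peashooter"


class QLearningPolicy:
    """Tabular Q-learning over a discretized high-level game state."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # exploration
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


class HybridAgent:
    """Defers to the scripted heuristic until the learned policy has visited
    a state often enough to be trusted (one simple hand-off rule)."""

    def __init__(self, scripted, learned, min_visits=20):
        self.scripted, self.learned = scripted, learned
        self.visits = defaultdict(int)
        self.min_visits = min_visits

    def act(self, state):
        self.visits[state] += 1
        if self.visits[state] < self.min_visits:
            return self.scripted.act(state)
        return self.learned.act(state)
```

In a training loop one would call HybridAgent.act at each decision point, pass the chosen high-level action to the game's own low-level logic, and feed the observed reward back through QLearningPolicy.update; the visit-count hand-off is just one simple way to let the scripted rules cover situations the learner has not yet seen.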
Related papers
- You Have Thirteen Hours in Which to Solve the Labyrinth: Enhancing AI Game Masters with Function Calling [35.721053667746716]
This paper presents a novel approach to enhance AI game masters by leveraging function calling in the context of the table-top role-playing game "Jim Henson's Labyrinth: The Adventure Game".
Our methodology involves integrating game-specific controls through functions, which we show improves the narrative quality and state update consistency of the AI game master.
arXiv Detail & Related papers (2024-09-11T02:03:51Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
To further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games [58.720142291102135]
We describe an effort to add an experimental reinforcement learning system to an existing automated game testing solution based on scripted bots.
We show a use case of leveraging reinforcement learning in game production and cover some of the largest time sinks that anyone who wants to make the same journey for their game may encounter.
We propose a few research directions that we believe will be valuable and necessary for making machine learning, and especially reinforcement learning, an effective tool in game production.
arXiv Detail & Related papers (2023-07-19T18:19:23Z) - Diversity-based Deep Reinforcement Learning Towards Multidimensional Difficulty for Fighting Game AI [0.9645196221785693]
We introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty.
We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
arXiv Detail & Related papers (2022-11-04T21:49:52Z) - Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z) - ScrofaZero: Mastering Trick-taking Poker Game Gongzhu by Deep Reinforcement Learning [2.7178968279054936]
We study Gongzhu, a trick-taking game analogous to, but slightly simpler than, contract bridge.
We train a strong Gongzhu AI, ScrofaZero, from tabula rasa by deep reinforcement learning.
We introduce new techniques for imperfect-information games, including stratified sampling, importance weighting, integration over equivalence classes, and Bayesian inference.
arXiv Detail & Related papers (2021-02-15T12:01:44Z) - Learning to Play Imperfect-Information Games by Imitating an Oracle Planner [77.67437357688316]
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z) - Testing match-3 video games with Deep Reinforcement Learning [0.0]
We study the possibility of using deep reinforcement learning to automate the testing process in match-3 video games.
We test this kind of network on Jelly Juice, a match-3 video game developed by redBit Games.
arXiv Detail & Related papers (2020-06-30T12:41:35Z) - Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)