Strategies for Using Proximal Policy Optimization in Mobile Puzzle Games
- URL: http://arxiv.org/abs/2007.01542v1
- Date: Fri, 3 Jul 2020 08:03:45 GMT
- Authors: Jeppe Theiss Kristensen, Paolo Burelli
- Abstract summary: This work investigates and evaluates strategies for applying the popular RL method Proximal Policy Optimization (PPO) in a casual mobile puzzle game.
We implemented and tested a number of different strategies against a real-world mobile puzzle game.
We identified several strategies that ensure more stable behaviour of the algorithm in this game genre.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While traditionally a labour intensive task, the testing of game content is
progressively becoming more automated. Among the many directions in which this
automation is taking shape, automatic play-testing is one of the most promising
thanks in part to advances in supervised and reinforcement learning (RL)
algorithms. However, these types of algorithms, while extremely powerful, often
suffer in production environments due to issues with reliability and
transparency in their training and usage.
In this work we investigate and evaluate strategies for applying
the popular RL method Proximal Policy Optimization (PPO) in a casual mobile
puzzle game with a specific focus on improving its reliability in training and
generalization during game playing.
We implemented and tested a number of different strategies against a
real-world mobile puzzle game (Lily's Garden from Tactile Games). We isolated
the conditions that lead to failures in either training or generalization
during testing, and we identified several strategies that ensure more stable
behaviour of the algorithm in this game genre.
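For readers unfamiliar with PPO, its core idea is a clipped surrogate objective that limits how far each policy update can move from the policy that collected the data. The sketch below is a generic PyTorch rendering of that loss, not code from the paper; the function name and the 0.2 clipping coefficient are illustrative defaults.

```python
# Generic sketch of PPO's clipped surrogate loss (Schulman et al., 2017).
# Not taken from the paper; names and the 0.2 clip range are illustrative.
import torch

def ppo_clip_loss(new_log_probs: torch.Tensor,
                  old_log_probs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    # Probability ratio r_t(theta) = pi_theta(a|s) / pi_theta_old(a|s).
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping removes the incentive to push the ratio outside [1-eps, 1+eps].
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic (lower) bound and negate for gradient descent.
    return -torch.min(unclipped, clipped).mean()
```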
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games (arXiv, 2024-06-12)
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using scripted AI alone.
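The hybrid pattern described in this entry can take many forms. As a purely hypothetical illustration (not the paper's actual architecture), one minimal version defers to the scripted policy whenever the learned policy is not sufficiently confident:

```python
# Hypothetical sketch of one way to combine a scripted AI with a learned
# policy: fall back to the script when the learned policy is uncertain.
# This illustrates the general pattern only, not the paper's architecture.
from typing import Callable, Sequence

def hybrid_action(state,
                  scripted_policy: Callable,        # hand-written rules
                  learned_policy: Callable,         # returns action probabilities
                  actions: Sequence[int],
                  confidence_threshold: float = 0.6) -> int:
    probs = learned_policy(state)                   # e.g. a softmax output
    best = max(range(len(actions)), key=lambda i: probs[i])
    if probs[best] >= confidence_threshold:
        return actions[best]                        # trust the RL agent
    return scripted_policy(state)                   # defer to the script
```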
- Two-Step Reinforcement Learning for Multistage Strategy Card Game (arXiv, 2023-11-29)
This study introduces a two-step reinforcement learning (RL) strategy tailored for "The Lord of the Rings: The Card Game" (LOTRCG).
This research diverges from conventional RL methods by adopting a phased learning approach.
The paper also explores a multi-agent system, where distinct RL agents are employed for various decision-making aspects of the game.
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks (arXiv, 2022-11-29)
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
- Game Theoretic Rating in N-player general-sum games with Equilibria (arXiv, 2022-10-05)
We propose novel algorithms suitable for N-player, general-sum rating of strategies in normal-form games according to the payoff rating system.
This enables well-established solution concepts, such as equilibria, to be leveraged to efficiently rate strategies in games with complex strategic interactions.
- Portfolio Search and Optimization for General Strategy Game-Playing (arXiv, 2021-04-21)
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets, we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
- Generating Diverse and Competitive Play-Styles for Strategy Games (arXiv, 2021-04-17)
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
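For context, MAP-Elites maintains an archive holding the best solution found for each cell of a user-defined behaviour space (here, play-styles). The sketch below is a generic rendering of that quality-diversity loop with illustrative names, not the paper's implementation:

```python
# Hypothetical sketch of the MAP-Elites quality-diversity loop: keep the
# best-scoring solution ("elite") found for each cell of a behaviour grid.
import random

def map_elites(evaluate, mutate, random_solution, iterations=10_000):
    """evaluate(sol) -> (fitness, behaviour_cell); behaviour_cell is hashable."""
    archive = {}  # behaviour_cell -> (fitness, solution)
    for _ in range(iterations):
        if archive:
            # Mutate a randomly chosen elite from the archive...
            parent = random.choice(list(archive.values()))[1]
            candidate = mutate(parent)
        else:
            candidate = random_solution()
        fitness, cell = evaluate(candidate)
        # ...and keep it if it is the best seen in its behaviour cell.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, candidate)
    return archive
```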
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game (arXiv, 2020-04-08)
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and we explain how they adapt to each other's playing style.
- Efficient exploration of zero-sum stochastic games (arXiv, 2020-02-24)
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
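As background for the exploitability objective mentioned in the last entry: in a two-player zero-sum game, exploitability measures how much the players could gain by best-responding to each other's strategies, and it is zero exactly at a Nash equilibrium. A minimal matrix-game version, with illustrative names, could look like this:

```python
# Minimal sketch of exploitability in a two-player zero-sum matrix game.
# A[i, j] is the row player's payoff; x and y are mixed strategies.
import numpy as np

def exploitability(A: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    value = x @ A @ y            # expected payoff of the profile (x, y)
    row_best = np.max(A @ y)     # best-response value for the row player
    col_best = np.min(x @ A)     # best response (minimizing) for the column player
    # Sum of incentives to deviate; zero exactly at a Nash equilibrium.
    return (row_best - value) + (value - col_best)
```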
This list is automatically generated from the titles and abstracts of the papers on this site.