Learning to Play Imperfect-Information Games by Imitating an Oracle Planner
- URL: http://arxiv.org/abs/2012.12186v1
- Date: Tue, 22 Dec 2020 17:29:57 GMT
- Title: Learning to Play Imperfect-Information Games by Imitating an Oracle Planner
- Authors: Rinu Boney, Alexander Ilin, Juho Kannala, Jarno Seppänen
- Abstract summary: We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
- Score: 77.67437357688316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider learning to play multiplayer imperfect-information games with
simultaneous moves and large state-action spaces. Previous attempts to tackle
such challenging games have largely focused on model-free learning methods,
often requiring hundreds of years of experience to produce competitive agents.
Our approach is based on model-based planning. We tackle the problem of partial
observability by first building an (oracle) planner that has access to the full
state of the environment and then distilling the knowledge of the oracle to a
(follower) agent which is trained to play the imperfect-information game by
imitating the oracle's choices. We experimentally show that planning with naive
Monte Carlo tree search does not perform very well in large combinatorial
action spaces. We therefore propose planning with a fixed-depth tree search and
decoupled Thompson sampling for action selection. We show that the planner is
able to discover efficient playing strategies in the games of Clash Royale and
Pommerman and the follower policy successfully learns to implement them by
training on a few hundred battles.
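The key algorithmic ingredient in the abstract is decoupled Thompson sampling over a large combinatorial action space: instead of keeping one statistic per joint action, the planner keeps independent statistics per action component and samples each component separately. The following is a minimal sketch of that idea, assuming a Beta-Bernoulli reward model and an action that factors into independent components (e.g., which card x where to place it); the class name, reward model, and simulated rollout are illustrative assumptions, not the paper's implementation.

    import numpy as np

    class DecoupledThompsonSampler:
        """Sketch of decoupled Thompson sampling for a factored action space.

        Assumes an action is a tuple of independent components and that
        outcomes are binary (win/loss); both are simplifying assumptions.
        """

        def __init__(self, component_sizes):
            # One Beta(alpha, beta) posterior per value of each component,
            # rather than one per joint action (whose count would explode
            # combinatorially).
            self.alpha = [np.ones(n) for n in component_sizes]
            self.beta = [np.ones(n) for n in component_sizes]

        def select_action(self):
            # Sample a success probability for every component value and
            # pick the best value of each component independently.
            return tuple(
                int(np.argmax(np.random.beta(a, b)))
                for a, b in zip(self.alpha, self.beta)
            )

        def update(self, action, reward):
            # Credit the binary outcome to every chosen component.
            for dim, idx in enumerate(action):
                if reward > 0:
                    self.alpha[dim][idx] += 1
                else:
                    self.beta[dim][idx] += 1

    # Usage: a hypothetical 8-cards x 6-placements action space.
    sampler = DecoupledThompsonSampler([8, 6])
    rng = np.random.default_rng(0)
    for _ in range(1000):
        action = sampler.select_action()
        # Stand-in for a fixed-depth tree-search evaluation returning win/loss.
        reward = float(rng.random() < 0.3 + 0.05 * action[0])
        sampler.update(action, reward)
    print(sampler.select_action())

In the paper's pipeline, the rewards for sampled joint actions would come from fixed-depth tree-search evaluations under the oracle's full-state access, and the planner's chosen actions would then serve as imitation targets for training the follower policy.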
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, map the strategies of specific opponents, and learn how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Know your Enemy: Investigating Monte-Carlo Tree Search with Opponent Models in Pommerman [14.668309037894586]
In combination with Reinforcement Learning, Monte-Carlo Tree Search has been shown to outperform human grandmasters in games such as Chess, Shogi and Go.
We investigate techniques that transform general-sum multiplayer games into single-player and two-player games.
arXiv Detail & Related papers (2023-05-22T16:39:20Z)
- Solving Royal Game of Ur Using Reinforcement Learning [0.0]
We train our agents using different methods, namely Monte Carlo, Q-learning, and Expected Sarsa, to learn an optimal policy for playing the strategic Royal Game of Ur.
Although it is hard to conclude which algorithm performs better overall when trained with limited resources, Expected Sarsa shows promising results in terms of learning speed.
arXiv Detail & Related papers (2022-08-23T01:26:37Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)