Does it matter how well I know what you're thinking? Opponent Modelling in an RTS game
- URL: http://arxiv.org/abs/2006.08659v1
- Date: Mon, 15 Jun 2020 18:10:22 GMT
- Title: Does it matter how well I know what you're thinking? Opponent Modelling in an RTS game
- Authors: James Goodman, Simon Lucas
- Abstract summary: We investigate the sensitivity of Monte Carlo Tree Search (MCTS) and a Rolling Horizon Evolutionary Algorithm (RHEA) to the accuracy of their modelling of the opponent in a simple Real-Time Strategy game.
We show that faced with an unknown opponent and a low computational budget it is better not to use any explicit model with RHEA, and to model the opponent's actions within the tree as part of the MCTS algorithm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Opponent Modelling tries to predict the future actions of opponents, and is required in order to perform well in multi-player games. There is a deep literature on learning an opponent model, but much less on how accurate such models must be to be useful. We investigate the sensitivity of Monte Carlo Tree Search (MCTS) and a Rolling Horizon Evolutionary Algorithm (RHEA) to the accuracy of their modelling of the opponent in a simple Real-Time Strategy game. We find that in this domain RHEA is much more sensitive to the accuracy of an opponent model than MCTS: MCTS generally does better even with an inaccurate model, while an inaccurate model degrades RHEA's performance. We show that, faced with an unknown opponent and a low computational budget, it is better not to use any explicit model with RHEA, and instead to model the opponent's actions within the tree as part of the MCTS algorithm.
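The abstract's recommendation to model the opponent's actions "within the tree" corresponds to running plain UCT over an alternating-move game tree: opponent decision nodes are searched with the same UCT statistics as the player's own, so no external opponent model is needed. Below is a minimal, hypothetical Python sketch of that idea; the `Node`/`mcts` structure and the toy `Nim` game are illustrative assumptions for a tiny turn-based zero-sum game, not the paper's RTS implementation.

```python
import math
import random

# Hypothetical sketch: UCT with the opponent modelled inside the tree.
class Node:
    def __init__(self, state, player, parent=None, action=None):
        self.state, self.player = state, player     # player to move here
        self.parent, self.action = parent, action
        self.children = []        # expanded child nodes
        self.untried = None       # actions not yet expanded
        self.visits = 0
        self.total = 0.0          # summed returns from player 0's viewpoint

def ucb(node, c):
    # Returns are stored from player 0's viewpoint; when the opponent
    # (player 1) is choosing at the parent node, the sign flips.  The
    # opponent is thus "modelled" by the same UCT statistics as us.
    mean = node.total / node.visits
    if node.parent.player == 1:
        mean = -mean
    return mean + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts(root_state, game, iters=2000, c=1.4):
    root = Node(root_state, player=0)
    root.untried = list(game.actions(root_state))
    for _ in range(iters):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = max(node.children, key=lambda n: ucb(n, c))
        # 2. Expansion: add one child for an untried action.
        if node.untried:
            a = node.untried.pop()
            nxt = game.step(node.state, a)
            child = Node(nxt, 1 - node.player, parent=node, action=a)
            child.untried = list(game.actions(nxt))
            node.children.append(child)
            node = child
        # 3. Rollout: both players act uniformly at random.
        state, player = node.state, node.player
        while not game.terminal(state):
            state = game.step(state, random.choice(game.actions(state)))
            player = 1 - player
        ret = game.return_for_p0(player)
        # 4. Backpropagation.
        while node is not None:
            node.visits += 1
            node.total += ret
            node = node.parent
    return max(root.children, key=lambda n: n.visits).action

class Nim:
    """Toy game: take 1 or 2 stones; whoever takes the last stone wins."""
    def actions(self, pile):
        return [a for a in (1, 2) if a <= pile]
    def step(self, pile, a):
        return pile - a
    def terminal(self, pile):
        return pile == 0
    def return_for_p0(self, player_to_move):
        # If it is player 1's turn at an empty pile, player 0 took the
        # last stone and won (+1 from player 0's viewpoint).
        return 1.0 if player_to_move == 1 else -1.0

print(mcts(5, Nim()))   # should converge on taking 2 (the optimal move)
```

The key design point is that the opponent's "policy" here is just the tree policy applied at their nodes, which is why this approach can hedge over plausible opponent replies when no reliable explicit model is available.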
Related papers
- A Minimaximalist Approach to Reinforcement Learning from Human Feedback [49.45285664482369]
We present Self-Play Preference Optimization (SPO), an algorithm for reinforcement learning from human feedback.
Our approach is minimalist in that it requires neither training a reward model nor unstable adversarial training.
We demonstrate that on a suite of continuous control tasks, we are able to learn significantly more efficiently than reward-model based approaches.
arXiv Detail & Related papers (2024-01-08T17:55:02Z)
- Know your Enemy: Investigating Monte-Carlo Tree Search with Opponent Models in Pommerman [14.668309037894586]
In combination with Reinforcement Learning, Monte-Carlo Tree Search has been shown to outperform human grandmasters in games such as Chess, Shogi and Go.
We investigate techniques that transform general-sum multiplayer games into single-player and two-player games.
arXiv Detail & Related papers (2023-05-22T16:39:20Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Revisiting RCAN: Improved Training for Image Super-Resolution [94.8765153437517]
We revisit the popular RCAN model and examine the effect of different training options in SR.
We show that RCAN can outperform or match nearly all the CNN-based SR architectures published after RCAN on standard benchmarks.
arXiv Detail & Related papers (2022-01-27T02:20:11Z)
- Towards Action Model Learning for Player Modeling [1.9659095632676098]
Player modeling attempts to create a computational model which accurately approximates a player's behavior in a game.
Most player modeling techniques rely on domain knowledge and are not transferable across games.
We present our findings on using action model learning (AML) to learn a player model in a domain-agnostic manner.
arXiv Detail & Related papers (2021-03-09T19:32:30Z)
- Markov Cricket: Using Forward and Inverse Reinforcement Learning to Model, Predict And Optimize Batting Performance in One-Day International Cricket [0.8122270502556374]
We model one-day international cricket games as Markov processes, applying forward and inverse Reinforcement Learning (RL) to develop three novel tools for the game.
We show that, when used as a proxy for remaining scoring resources, this approach outperforms the state-of-the-art Duckworth-Lewis-Stern method by a factor of 3 to 10.
We envisage that our prediction and simulation techniques may provide a fairer alternative for estimating final scores in interrupted games, while the inferred reward model may provide useful insights for optimizing playing strategy in the professional game.
arXiv Detail & Related papers (2021-03-07T13:11:16Z)
- L2E: Learning to Exploit Your Opponent [66.66334543946672]
We propose a novel Learning to Exploit framework for implicit opponent modeling.
L2E acquires the ability to exploit opponents through a few interactions with different opponents during training.
We propose a novel opponent strategy generation algorithm that produces effective opponents for training automatically.
arXiv Detail & Related papers (2021-02-18T14:27:59Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition [9.75720700239984]
We propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning; a schematic sketch of this kind of RHEA-plus-opponent-model loop follows the list below.
Our proposed bot with the policy-gradient-based opponent model is the only one among the top five bots in the 2019 competition that does not use Monte-Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2020-03-31T04:44:33Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
arXiv Detail & Related papers (2019-03-01T15:40:19Z)
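For contrast with the in-tree MCTS sketch above, the "Enhanced Rolling Horizon Evolution Algorithm" entry combines RHEA with a learned opponent model. Below is a hedged sketch of what such a loop can look like: candidate action sequences are evolved, and each is scored by rolling the game forward while an explicit `opponent_policy` supplies the opponent's replies. The `game` interface (`actions`, `joint_step`, `heuristic_value`, `terminal`), the `ToyDuel` game, and all parameter choices are assumptions for illustration, not the competition bot's actual design.

```python
import random

def rhea_plan(state, game, opponent_policy,
              horizon=8, pop=20, gens=10, mutate_p=0.3):
    """Hypothetical Rolling Horizon EA with an explicit opponent model.

    A candidate is a sequence of our actions over `horizon` ticks.
    Fitness simulates the game forward, drawing the opponent's reply
    at each tick from `opponent_policy` (e.g. a learned policy), and
    scores the resulting horizon state heuristically.  Assumes a
    fixed action set, as in many simultaneous-move RTS abstractions.
    """
    def random_plan():
        return [random.choice(game.actions(state, player=0))
                for _ in range(horizon)]

    def fitness(plan):
        s = state
        for my_action in plan:
            if game.terminal(s):
                break
            opp_action = opponent_policy(s)        # the explicit model
            s = game.joint_step(s, my_action, opp_action)
        return game.heuristic_value(s, player=0)

    population = [random_plan() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]
        # Mutate the elite to refill the population.
        children = [[a if random.random() > mutate_p
                     else random.choice(game.actions(state, player=0))
                     for a in parent]
                    for parent in elite]
        population = elite + children
    return max(population, key=fitness)[0]   # first action of the best plan

class ToyDuel:
    """Toy simultaneous-move game: each tick both players pick 0 or 1;
    we gain a point whenever our pick differs from the opponent's."""
    def actions(self, state, player):
        return [0, 1]
    def terminal(self, state):
        return False
    def joint_step(self, score, mine, theirs):
        return score + (1 if mine != theirs else 0)
    def heuristic_value(self, score, player):
        return score

# Against an opponent that always plays 0, the evolved plan should
# almost surely open with 1.
print(rhea_plan(0, ToyDuel(), opponent_policy=lambda s: 0))
```

This sketch also makes concrete why RHEA is sensitive to model accuracy, as the main paper reports: every fitness evaluation trusts `opponent_policy`, so a wrong model systematically mis-scores plans, whereas MCTS's in-tree statistics can adapt during search.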
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this list or of any other information it contains, and is not responsible for any consequences of their use.