Adaptive Warm-Start MCTS in AlphaZero-like Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2105.06136v1
- Date: Thu, 13 May 2021 08:24:51 GMT
- Title: Adaptive Warm-Start MCTS in AlphaZero-like Deep Reinforcement Learning
- Authors: Hui Wang and Mike Preuss and Aske Plaat
- Abstract summary: We propose a warm-start enhancement method for Monte Carlo Tree Search.
We show that our approach works better than the fixed $I^\prime$, especially for "deep," tactical games.
We conclude that AlphaZero-like deep reinforcement learning benefits from adaptive rollout based warm-start.
- Score: 5.55810668640617
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AlphaZero has achieved impressive performance in deep reinforcement learning
by utilizing an architecture that combines search and training of a neural
network in self-play. Many researchers are looking for ways to reproduce and
improve results for other games/tasks. However, the architecture is designed to
learn from scratch, tabula rasa, accepting a cold-start problem in self-play.
Recently, a warm-start enhancement method for Monte Carlo Tree Search was
proposed to improve the self-play starting phase. It employs a fixed parameter
$I^\prime$ to control the warm-start length. Improved performance was reported
in small board games. In this paper we present results with an adaptive switch
method. Experiments show that our approach works better than the fixed
$I^\prime$, especially for "deep," tactical games (Othello and Connect Four).
We conjecture that the adaptive value for $I^\prime$ is also influenced by the
size of the game, and that on average $I^\prime$ will increase with game size.
We conclude that AlphaZero-like deep reinforcement learning benefits from
adaptive rollout based warm-start, as Rapid Action Value Estimate did for
rollout-based reinforcement learning 15 years ago.
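Since the abstract describes the adaptive switch only at a high level, the following Python sketch is purely illustrative: the toy game, the untrained-network stub, and the value-agreement criterion used to decide when to stop the warm start are assumptions, not the authors' method. It contrasts a fixed warm-start length $I^\prime$ with an adaptive switch that keeps using rollout-based evaluation until the network's value estimates become usable.

```python
import random

# Illustrative sketch only: CoinFlipGame, UntrainedNet, and the
# value-agreement criterion below are assumptions for demonstration,
# not the authors' implementation of the adaptive warm-start switch.

class CoinFlipGame:
    """Toy stand-in for a small board game."""
    def __init__(self, score=0, turns_left=10):
        self.score, self.turns_left = score, turns_left

    def is_terminal(self):
        return self.turns_left == 0

    def legal_moves(self):
        return [-1, +1]

    def play(self, move):
        return CoinFlipGame(self.score + move, self.turns_left - 1)

    def outcome(self):
        return 1.0 if self.score > 0 else (-1.0 if self.score < 0 else 0.0)


class UntrainedNet:
    """Stand-in for a freshly initialised value network (near-random output)."""
    def value(self, state):
        return random.uniform(-1.0, 1.0)


def rollout_value(state, max_depth=50):
    """Warm-start evaluator: estimate a state's value with a random playout."""
    for _ in range(max_depth):
        if state.is_terminal():
            break
        state = state.play(random.choice(state.legal_moves()))
    return state.outcome()


def fixed_warm_start(iteration, i_prime=5):
    """Fixed scheme: rollouts for the first I' self-play iterations, then the net."""
    return "rollout" if iteration < i_prime else "network"


def adaptive_warm_start(net, probe_states, tolerance=0.25):
    """Adaptive scheme (assumed criterion): keep warm-starting until the
    network's value predictions roughly agree with rollout estimates."""
    gap = sum(abs(net.value(s) - rollout_value(s)) for s in probe_states)
    return "network" if gap / len(probe_states) < tolerance else "rollout"


if __name__ == "__main__":
    net, root = UntrainedNet(), CoinFlipGame()
    probes = [root.play(m) for m in root.legal_moves()]
    for it in range(8):
        print(it, fixed_warm_start(it), adaptive_warm_start(net, probes))
```

Under this assumed criterion the switch point falls out of the self-play data rather than being fixed in advance, which is the behaviour the adaptive method aims for.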
Related papers
- Targeted Search Control in AlphaZero for Effective Policy Improvement [93.30151539224144]
We introduce Go-Exploit, a novel search control strategy for AlphaZero.
Go-Exploit samples the start state of its self-play trajectories from an archive of states of interest.
Go-Exploit learns with a greater sample efficiency than standard AlphaZero.
arXiv Detail & Related papers (2023-02-23T22:50:24Z)
- A Ranking Game for Imitation Learning [22.028680861819215]
We treat imitation as a two-player ranking-based Stackelberg game between a $\textit{policy}$ and a $\textit{reward}$ function.
This game encompasses a large subset of both inverse reinforcement learning (IRL) methods and methods which learn from offline preferences.
We theoretically analyze the requirements of the loss function used for ranking policy performances to facilitate near-optimal imitation learning at equilibrium.
arXiv Detail & Related papers (2022-02-07T19:38:22Z)
- No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z)
- Chasing Sparsity in Vision Transformers: An End-to-End Exploration [127.10054032751714]
Vision transformers (ViTs) have recently received explosive popularity, but their enormous model sizes and training costs remain daunting.
This paper aims to trim down both the training memory overhead and the inference complexity, without sacrificing the achievable accuracy.
Specifically, instead of training full ViTs, we dynamically extract and train sparse subnetworks, while sticking to a fixed small parameter budget.
arXiv Detail & Related papers (2021-06-08T17:18:00Z)
- Munchausen Reinforcement Learning [50.396037940989146]
Bootstrapping is a core mechanism in Reinforcement Learning (RL).
We show that slightly modifying Deep Q-Network (DQN) in that way provides an agent that is competitive with distributional methods on Atari games.
We provide strong theoretical insights on what happens under the hood -- implicit Kullback-Leibler regularization and increase of the action-gap.
arXiv Detail & Related papers (2020-07-28T18:30:23Z)
- Warm-Start AlphaZero Self-Play Search Enhancements [5.096685900776467]
Recently, AlphaZero has achieved landmark results in deep reinforcement learning.
We propose a novel approach to deal with the cold-start problem of self-play training from scratch by employing simple search enhancements.
Our experiments indicate that most of these enhancements improve the performance of their baseline player in three different (small) board games.
arXiv Detail & Related papers (2020-04-26T11:48:53Z)
- Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition [59.34067736545355]
We study the reinforcement learning problem in finite-horizon episodic Markov Decision Processes (MDPs) with $S$ states, $A$ actions, and episode length $H$.
We propose a model-free algorithm UCB-Advantage and prove that it achieves $\tilde{O}(\sqrt{H^2SAT})$ regret where $T = KH$ and $K$ is the number of episodes to play.
arXiv Detail & Related papers (2020-04-21T14:00:06Z)
- Analysis of Hyper-Parameters for Small Games: Iterations or Epochs in Self-Play? [4.534822382040738]
In self-play, Monte Carlo Tree Search is used to train a deep neural network, which is then used in tree searches.
We evaluate how these parameters contribute to training in an AlphaZero-like self-play algorithm.
We find surprising results where too much training can sometimes lead to lower performance.
arXiv Detail & Related papers (2020-03-12T19:28:48Z)
- Provable Self-Play Algorithms for Competitive Reinforcement Learning [48.12602400021397]
We study self-play in competitive reinforcement learning under the setting of Markov games.
We show that a self-play algorithm achieves regret $\tilde{\mathcal{O}}(\sqrt{T})$ after playing $T$ steps of the game.
We also introduce an explore-then-exploit style algorithm, which achieves a slightly worse regret $\tilde{\mathcal{O}}(T^{2/3})$, but is guaranteed to run in polynomial time even in the worst case.
arXiv Detail & Related papers (2020-02-10T18:44:50Z)