Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing
- URL: http://arxiv.org/abs/2407.03049v1
- Date: Wed, 3 Jul 2024 12:18:28 GMT
- Title: Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing
- Authors: Dennis J. N. J. Soemers, Chiara F. Sironi, Torsten Schuster, Mark H. M. Winands
- Abstract summary: This paper discusses eight enhancements for Monte-Carlo Tree Search (MCTS) in General Video Game Playing (GVGP).
Some of these are known from existing literature, and are either extended or introduced in the context of GVGP.
Most enhancements are shown to provide statistically significant increases in win percentages when applied individually.
- Score: 1.2882480196517305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: General Video Game Playing (GVGP) is a field of Artificial Intelligence where agents play a variety of real-time video games that are unknown in advance. This limits the use of domain-specific heuristics. Monte-Carlo Tree Search (MCTS) is a search technique for game playing that does not rely on domain-specific knowledge. This paper discusses eight enhancements for MCTS in GVGP: Progressive History, N-Gram Selection Technique, Tree Reuse, Breadth-First Tree Initialization, Loss Avoidance, Novelty-Based Pruning, Knowledge-Based Evaluations, and Deterministic Game Detection. Some of these are known from existing literature and are either extended or introduced in the context of GVGP, and some are novel enhancements for MCTS. Most enhancements are shown to provide statistically significant increases in win percentages when applied individually. When combined, they increase the average win percentage over sixty different games from 31.0% to 48.4% in comparison to a vanilla MCTS implementation, approaching a level that is competitive with the best agents of the GVG-AI competition in 2015.
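As an illustration of the kind of enhancement listed above, here is a minimal Python sketch of Progressive History as commonly formulated in the MCTS literature: UCB1 selection plus a globally shared history bias that decays with visits. The Node class, the history table, and the constants C and W are illustrative assumptions, not the paper's exact implementation; rewards are assumed to lie in [0, 1].

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    action: str                      # move that led to this node
    visits: int = 0
    total_reward: float = 0.0        # assumes rewards in [0, 1] (loss..win)
    children: list = field(default_factory=list)

# Global history heuristic: average playout result per action, shared
# across the whole tree (updated during backpropagation, not shown here).
history = {}  # action -> (total_reward, playout_count)

def history_value(action):
    total, count = history.get(action, (0.0, 0))
    return total / count if count else 0.0

def ph_score(parent_visits, child, C=0.6, W=3.0):
    if child.visits == 0:
        return math.inf              # visit every child at least once
    exploit = child.total_reward / child.visits
    explore = C * math.sqrt(math.log(parent_visits) / child.visits)
    # History bias: strong while a node is young and scoring poorly,
    # fading as visits accumulate so plain UCB1 takes over.
    bias = history_value(child.action) * W / ((1.0 - exploit) * child.visits + 1.0)
    return exploit + explore + bias

def select(node):
    return max(node.children, key=lambda c: ph_score(node.visits, c))

history["up"] = (6.0, 10)            # "up" won 6 of 10 playouts so far
root = Node("root", visits=20,
            children=[Node("up", 5, 2.0), Node("down", 5, 2.0)])
print(select(root).action)           # "up": history breaks the UCB1 tie
```

The bias term shrinks as a child accumulates visits, so the global action statistics only steer selection while node-local estimates are still noisy.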
Related papers
- Monte-Carlo Tree Search for Multi-Agent Pathfinding: Preliminary Results [60.4817465598352]
We introduce an original variant of Monte-Carlo Tree Search (MCTS) tailored to multi-agent pathfinding.
Specifically, we use individual paths to assist the agents with the goal-reaching behavior.
We also use a dedicated decomposition technique to reduce the branching factor of the tree search procedure.
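A toy calculation (not the paper's exact decomposition) shows why reducing the branching factor matters here: expanding joint actions grows exponentially in the number of agents, while a per-agent decomposition grows only linearly.

```python
from itertools import product

moves = ["up", "down", "left", "right", "wait"]   # b = 5 moves per agent
n_agents = 4

# Joint-action node: one child per combination of simultaneous moves.
joint = len(list(product(moves, repeat=n_agents)))   # 5**4 = 625

# Decomposed: agents decide one after another across n tree levels.
decomposed = len(moves) * n_agents                   # 5 * 4 = 20

print(f"joint branching: {joint}, decomposed: {decomposed}")
```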
arXiv Detail & Related papers (2023-07-25T12:33:53Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, that reads the game's original academic paper and uses the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
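A hypothetical sketch of the control loop this describes: context drawn from the game's paper plus a chain of questions, with the final answer taken as the action. The question chain and the query_llm stub are invented stand-ins for illustration, not the SPRING implementation or any real LLM API.

```python
PAPER_CONTEXT = "...relevant excerpts from the game's academic paper..."
QUESTIONS = [                      # invented example chain, not SPRING's
    "What do I observe right now?",
    "What is the most valuable next subgoal?",
    "Which single action makes progress toward that subgoal?",
]

def query_llm(prompt: str) -> str:
    return "noop"                  # stub: replace with a real LLM client call

def choose_action(observation: str) -> str:
    transcript = PAPER_CONTEXT + "\nObservation: " + observation
    answer = ""
    for question in QUESTIONS:     # chain-of-thought: feed answers forward
        transcript += "\nQ: " + question
        answer = query_llm(transcript)
        transcript += "\nA: " + answer
    return answer                  # the last answer names the chosen action

print(choose_action("trees nearby, low health"))
```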
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Proof Number Based Monte-Carlo Tree Search [1.93674821880689]
This paper proposes a new game-search algorithm, PN-MCTS, which combines Monte-Carlo Tree Search (MCTS) and Proof-Number Search (PNS).
We define three areas where the additional knowledge provided by the proof and disproof numbers gathered in MCTS trees might be used.
Experiments show that PN-MCTS is able to outperform MCTS in all tested game domains, achieving win rates up to 96.2% for Lines of Action.
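A minimal sketch of the core idea, with assumed class and field names: each MCTS node additionally carries proof and disproof numbers maintained PNS-style, and one plausible use (of the three areas the paper defines) is steering selection away from proven losses.

```python
import math

class PNNode:
    def __init__(self, action=None):
        self.action = action
        self.visits = 0
        self.total_reward = 0.0
        self.children = []
        self.pn = 1     # proof number: remaining effort to prove a win
        self.dn = 1     # disproof number: remaining effort to prove a loss

def backup_proof_numbers(node, our_turn):
    """PNS-style update: OR node on our turn, AND node on the opponent's."""
    if not node.children:
        return
    pns = [c.pn for c in node.children]
    dns = [c.dn for c in node.children]
    node.pn, node.dn = (min(pns), sum(dns)) if our_turn else (sum(pns), min(dns))

def select_score(parent, child, C=0.7):
    if child.dn == 0:
        return -math.inf            # proven loss: never descend here
    if child.pn == 0:
        return math.inf             # proven win: take it immediately
    if child.visits == 0:
        return math.inf
    return (child.total_reward / child.visits
            + C * math.sqrt(math.log(parent.visits) / child.visits))

root, a, b = PNNode(), PNNode("a"), PNNode("b")
b.dn = 0                            # b solved as a loss for us
root.children = [a, b]
backup_proof_numbers(root, our_turn=True)
print(select_score(root, b))        # -inf: pruned from selection
```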
arXiv Detail & Related papers (2023-03-16T16:27:07Z)
- Optimisation of MCTS Player for The Lord of the Rings: The Card Game [0.0]
The article presents research on the use of Monte-Carlo Tree Search (MCTS) methods to create an artificial player for the popular card game "The Lord of the Rings".
arXiv Detail & Related papers (2021-09-24T14:42:32Z)
- MCTS Based Agents for Multistage Single-Player Card Game [0.0]
The article presents the use of Monte Carlo Tree Search algorithms for the card game Lord of the Rings.
The main challenge was the complexity of the game mechanics, in which each round consists of 5 decision stages and 2 random stages.
To test various decision-making algorithms, a game simulator has been implemented.
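A self-contained toy mirroring the round structure described (five decision stages interleaved with two chance stages); the stage names, payoffs, and greedy policy are invented purely to show the control flow such a simulator must support.

```python
import random

DECISION_STAGES = ("plan", "quest", "travel", "defend", "attack")  # 5 stages
CHANCE_AFTER = {"quest", "defend"}       # a random stage follows these two

def play_round(score, policy, rng):
    for stage in DECISION_STAGES:
        options = [rng.randint(-2, 4) for _ in range(3)]  # toy legal moves
        score += policy(stage, options)                   # decision stage
        if stage in CHANCE_AFTER:
            score += rng.choice([-3, 0, 1])               # chance stage
    return score

greedy = lambda stage, options: max(options)
print(play_round(0, greedy, random.Random(7)))
```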
arXiv Detail & Related papers (2021-09-24T10:56:54Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) can be used to achieve different play-styles while maintaining a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
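A minimal sketch of progressive unpruning as commonly described in the MCTS literature: rank children by a domain heuristic and expose only the top k, letting k grow with the parent's visit count. The constants k_init and alpha are illustrative assumptions, not the paper's values.

```python
from types import SimpleNamespace

def visible_children(node, heuristic, k_init=3, alpha=0.5):
    # Only the k best children (by a domain heuristic) are searched; k
    # grows with visits, so weak moves re-enter as effort accumulates.
    k = k_init + int(node.visits ** alpha)
    return sorted(node.children, key=heuristic, reverse=True)[:k]

node = SimpleNamespace(visits=100, children=list(range(20)))
print(visible_children(node, heuristic=lambda c: -c))  # 13 of 20 children
```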
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z)
- Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search [66.34387649910046]
Monte Carlo tree search (MCTS) has achieved state-of-the-art results in many domains such as Go and Atari games.
We propose to stop the search dynamically by predicting the uncertainty of the current search status and using the result to decide whether to stop searching.
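As a simple stand-in for the learned uncertainty predictor described above, here is a provably safe stopping rule: halt once no remaining simulation budget can change which child is most visited. The field names are illustrative assumptions.

```python
from types import SimpleNamespace

def can_stop(root, simulations_left):
    # If the most-visited child leads the runner-up by more than the
    # remaining simulation budget, further search cannot change the final
    # (most-visited) move, so stopping early is safe.
    visits = sorted((c.visits for c in root.children), reverse=True)
    return len(visits) < 2 or visits[0] - visits[1] > simulations_left

root = SimpleNamespace(children=[SimpleNamespace(visits=v) for v in (900, 60, 40)])
print(can_stop(root, simulations_left=500))  # True: 900 - 60 > 500
```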
arXiv Detail & Related papers (2020-12-14T19:49:25Z)
- Playing Carcassonne with Monte Carlo Tree Search [0.0]
We explore the use of vanilla Monte-Carlo Tree Search (MCTS) and Rapid Action Value Estimation (MCTS-RAVE) in the game of Carcassonne.
We compare the strengths of the MCTS-based methods with the Star2.5 algorithm, previously reported to yield competitive results in the game of Carcassonne.
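A minimal sketch of the blended value used in MCTS-RAVE: the node's Monte-Carlo mean is mixed with its AMAF (all-moves-as-first) mean, with the weight beta following Gelly and Silver's schedule. The field names and the constant k are assumptions, not this paper's settings.

```python
import math
from types import SimpleNamespace

def rave_value(child, k=1000):
    # beta ~ 1 with few visits (trust AMAF), -> 0 as visits grow (trust MC).
    beta = math.sqrt(k / (3 * child.visits + k)) if child.visits else 1.0
    mc = child.total_reward / child.visits if child.visits else 0.0
    amaf = child.amaf_reward / child.amaf_visits if child.amaf_visits else 0.0
    return (1 - beta) * mc + beta * amaf

c = SimpleNamespace(visits=50, total_reward=30.0,
                    amaf_visits=400, amaf_reward=180.0)
print(round(rave_value(c), 3))  # blends 0.6 (MC) with 0.45 (AMAF)
```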
arXiv Detail & Related papers (2020-09-27T22:35:53Z)
- Single-Agent Optimization Through Policy Iteration Using Monte-Carlo Tree Search [8.22379888383833]
The combination of Monte-Carlo Tree Search (MCTS) and deep reinforcement learning is state-of-the-art in two-player perfect-information games.
We describe a search algorithm that uses a variant of MCTS enhanced by 1) a novel action value normalization mechanism for games with potentially unbounded rewards, 2) a virtual loss function that enables effective search parallelization, and 3) a policy network, trained by generations of self-play, to guide the search.
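Of the three enhancements, the normalization mechanism is the easiest to sketch: a running min-max rescaling of observed values in the style popularized by MuZero. Whether this matches the paper's exact mechanism is an assumption.

```python
class MinMaxStats:
    """Track the value range seen in the tree and rescale action values to
    [0, 1] before the UCB exploitation term, so exploration constants stay
    meaningful even when game rewards are unbounded."""
    def __init__(self):
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def update(self, value: float) -> None:
        self.minimum = min(self.minimum, value)
        self.maximum = max(self.maximum, value)

    def normalize(self, value: float) -> float:
        if self.maximum > self.minimum:
            return (value - self.minimum) / (self.maximum - self.minimum)
        return value  # range not yet established; leave value unscaled

stats = MinMaxStats()
for v in (12.0, -3.0, 40.0):
    stats.update(v)
print(stats.normalize(12.0))  # (12 - (-3)) / (40 - (-3)) ~ 0.349
```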
arXiv Detail & Related papers (2020-05-22T18:02:36Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
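A toy triplet store showing the kind of belief graph involved; note the paper learns its graph updates end-to-end from raw text, whereas the hand-written add/remove calls here are purely illustrative.

```python
class BeliefGraph:
    def __init__(self):
        self.triplets = set()          # (subject, relation, object) facts

    def update(self, add=(), remove=()):
        self.triplets -= set(remove)   # retract facts contradicted by text
        self.triplets |= set(add)      # assert newly observed facts

g = BeliefGraph()
g.update(add={("player", "at", "kitchen"), ("key", "in", "drawer")})
g.update(add={("key", "held_by", "player")}, remove={("key", "in", "drawer")})
print(sorted(g.triplets))
```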
arXiv Detail & Related papers (2020-02-21T04:38:37Z)