Continuous Monte Carlo Graph Search
- URL: http://arxiv.org/abs/2210.01426v3
- Date: Wed, 7 Feb 2024 15:56:30 GMT
- Title: Continuous Monte Carlo Graph Search
- Authors: Kalle Kujanpää, Amin Babadi, Yi Zhao, Juho Kannala, Alexander Ilin, Joni Pajarinen
- Abstract summary: Continuous Monte Carlo Graph Search (CMCGS) is an extension of Monte Carlo Tree Search (MCTS) to online planning in environments with continuous state and action spaces.
CMCGS takes advantage of the insight that, during planning, sharing the same action policy between several states can yield high performance.
It can be scaled up through parallelization, and it outperforms the Cross-Entropy Method (CEM) in continuous control with learned dynamics models.
- Score: 61.11769232283621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online planning is crucial for high performance in many complex sequential
decision-making tasks. Monte Carlo Tree Search (MCTS) employs a principled
mechanism for trading off exploration for exploitation for efficient online
planning, and it outperforms comparison methods in many discrete
decision-making domains such as Go, Chess, and Shogi. Subsequently, extensions
of MCTS to continuous domains have been developed. However, the inherent high
branching factor and the resulting explosion of the search tree size are
limiting the existing methods. To address this problem, we propose Continuous
Monte Carlo Graph Search (CMCGS), an extension of MCTS to online planning in
environments with continuous state and action spaces. CMCGS takes advantage of
the insight that, during planning, sharing the same action policy between
several states can yield high performance. To implement this idea, at each time
step, CMCGS clusters similar states into a limited number of stochastic action
bandit nodes, which produce a layered directed graph instead of an MCTS search
tree. Experimental evaluation shows that CMCGS outperforms comparable planning
methods in several complex continuous DeepMind Control Suite benchmarks and 2D
navigation and exploration tasks with limited sample budgets. Furthermore,
CMCGS can be scaled up through parallelization, and it outperforms the
Cross-Entropy Method (CEM) in continuous control with learned dynamics models.
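The abstract repeatedly benchmarks against the Cross-Entropy Method (CEM). As a point of reference for that baseline, here is a minimal CEM planning sketch: it fits a diagonal Gaussian over open-loop action sequences by repeatedly refitting to the top-scoring "elite" samples. The function names, signatures, and hyperparameters are illustrative assumptions, not taken from the paper; `dynamics` and `reward` stand in for a known or learned model.

```python
import numpy as np

def cem_plan(dynamics, reward, s0, horizon=10, action_dim=1,
             pop=64, elites=8, iters=5, seed=0):
    """Cross-Entropy Method planner (sketch).

    Maintains a Gaussian over action sequences of shape
    (horizon, action_dim); each iteration samples a population,
    rolls each sequence out through the model, and refits the
    Gaussian to the highest-return elite samples.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros((horizon, action_dim))
    sigma = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample a population of open-loop plans.
        plans = rng.normal(mu, sigma, size=(pop, horizon, action_dim))
        returns = np.empty(pop)
        for i, plan in enumerate(plans):
            s, total = s0, 0.0
            for a in plan:
                total += reward(s, a)
                s = dynamics(s, a)
            returns[i] = total
        # Refit the Gaussian to the elite (top-return) plans.
        elite = plans[np.argsort(returns)[-elites:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu  # mean action sequence; execute mu[0] and replan (MPC-style)
```

In the model-predictive-control usage that the abstract's experiments imply, only the first action of the returned sequence is executed before replanning from the new state.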
Related papers
- Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search through State Occupancy Regularization [18.25487451605638]
We derive a tree search algorithm based on policy optimization with state occupancy measure regularization, which we call Volume-MCTS.
We show that count-based exploration and sampling-based motion planning can be derived as approximate solutions to this state occupancy measure regularized objective.
We test our method on several robot navigation problems, and find that Volume-MCTS outperforms AlphaZero and displays significantly better long-horizon exploration properties.
arXiv Detail & Related papers (2024-07-07T22:58:52Z)
- Learning Logic Specifications for Policy Guidance in POMDPs: an Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z)
- Combining a Meta-Policy and Monte-Carlo Planning for Scalable Type-Based Reasoning in Partially Observable Environments [21.548271801592907]
We propose an online Monte-Carlo Tree Search based planning method for type-based reasoning in large partially observable environments.
POTMMCP incorporates a novel meta-policy for guiding search and evaluating beliefs, allowing it to search more effectively over longer horizons.
We show that our method converges to the optimal solution in the limit and empirically demonstrate that it effectively adapts online to diverse sets of other agents.
arXiv Detail & Related papers (2023-06-09T17:43:49Z)
- Learning Logic Specifications for Soft Policy Guidance in POMCP [71.69251176275638]
Partially Observable Monte Carlo Planning (POMCP) is an efficient solver for Partially Observable Markov Decision Processes (POMDPs)
POMCP suffers from sparse reward functions, namely, rewards achieved only when the final goal is reached.
In this paper, we use inductive logic programming to learn logic specifications from traces of POMCP executions.
arXiv Detail & Related papers (2023-03-16T09:37:10Z)
- SimCS: Simulation for Domain Incremental Online Continual Segmentation [60.18777113752866]
Existing continual learning approaches mostly focus on image classification in the class-incremental setup.
We propose SimCS, a parameter-free method complementary to existing ones that uses simulated data to regularize continual learning.
arXiv Detail & Related papers (2022-11-29T14:17:33Z)
- TaSPM: Targeted Sequential Pattern Mining [53.234101208024335]
We propose a generic framework namely TaSPM, based on the fast CM-SPAM algorithm.
We also propose several pruning strategies to reduce meaningless operations in mining processes.
Experiments show that the novel targeted mining algorithm TaSPM can achieve faster running time and less memory consumption.
arXiv Detail & Related papers (2022-02-26T17:49:47Z)
- Variational Combinatorial Sequential Monte Carlo Methods for Bayesian Phylogenetic Inference [4.339931151475307]
We introduce Variational Combinatorial Sequential Monte Carlo (VCSMC), a powerful framework that uses variational sequential search to learn distributions over intricate combinatorial structures.
We show that VCSMC and CSMC are efficient and explore higher probability spaces than existing methods on a range of tasks.
arXiv Detail & Related papers (2021-05-31T19:44:24Z)
- Scalable Anytime Planning for Multi-Agent MDPs [37.69939216970677]
We present a scalable tree search planning algorithm for large multi-agent sequential decision problems that require dynamic collaboration.
Our algorithm comprises three elements: online planning with Monte Carlo Tree Search (MCTS), factored representations of local agent interactions with coordination graphs, and the iterative Max-Plus method for joint action selection.
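The entry above names iterative Max-Plus as its joint-action-selection component. As an illustration of that step only, here is a minimal sketch of Max-Plus message passing for two agents connected by a single coordination-graph edge; the function name and setup are illustrative assumptions, not the paper's code. For a single-edge (tree-structured) graph, one round of messages already yields the exact joint maximizer.

```python
import numpy as np

def max_plus_two_agents(u1, u2, q, iters=10):
    """Select a joint action for two agents sharing one
    coordination-graph edge via iterative Max-Plus (sketch).

    u1, u2: individual utilities, one entry per local action.
    q: pairwise payoff matrix, q[a1, a2].
    Maximizes u1[a1] + u2[a2] + q[a1, a2].
    """
    for _ in range(iters):
        # Message from agent 1 to agent 2: for each of agent 2's
        # actions, the best contribution agent 1 can add.
        m12 = np.max(u1[:, None] + q, axis=0)
        # Symmetric message from agent 2 to agent 1.
        m21 = np.max(u2[None, :] + q, axis=1)
    # Each agent picks the action maximizing its own utility
    # plus the incoming message.
    a1 = int(np.argmax(u1 + m21))
    a2 = int(np.argmax(u2 + m12))
    return a1, a2
```

On graphs with cycles, Max-Plus is run as an anytime approximation; the paper combines it with MCTS and factored local interactions to scale to many agents.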
arXiv Detail & Related papers (2021-01-12T22:50:17Z)
- Parallelization of Monte Carlo Tree Search in Continuous Domains [2.658812114255374]
Monte Carlo Tree Search (MCTS) has proven to be capable of solving challenging tasks in domains such as Go, chess and Atari.
Our work builds upon existing parallelization strategies and extends them to continuous domains.
arXiv Detail & Related papers (2020-03-30T18:43:59Z)
- Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.