Improved Monte Carlo Planning via Causal Disentanglement for Structurally-Decomposed Markov Decision Processes
- URL: http://arxiv.org/abs/2406.16151v2
- Date: Fri, 03 Oct 2025 14:08:38 GMT
- Title: Improved Monte Carlo Planning via Causal Disentanglement for Structurally-Decomposed Markov Decision Processes
- Authors: Larkin Liu, Shiqi Liu, Yinruo Hua, Matej Jusup
- Abstract summary: We introduce the Structurally Decomposed MDP (SD-MDP), which leverages causal disentanglement to partition an MDP's temporal causal graph into independent components. We demonstrate superior policy performance over benchmarks across various logistics and finance domains.
- Score: 0.9768138268100163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Markov Decision Processes (MDPs), as a general-purpose framework, often overlook the benefits of incorporating the causal structure of the transition and reward dynamics. For a subclass of resource allocation problems, we introduce the Structurally Decomposed MDP (SD-MDP), which leverages causal disentanglement to partition an MDP's temporal causal graph into independent components. By exploiting this disentanglement, SD-MDP enables dimensionality reduction and computational efficiency gains in optimal value function estimation. We reduce the sequential optimization problem to a fractional knapsack problem with log-linear complexity $O(T \log T)$, outperforming traditional stochastic programming methods that exhibit polynomial complexity with respect to the time horizon $T$. Additionally, SD-MDP's computational advantages are independent of state-action space size, making it viable for high-dimensional spaces. Furthermore, our approach integrates seamlessly with Monte Carlo Tree Search (MCTS), achieving higher expected rewards under constrained simulation budgets while providing a vanishing simple regret bound. Empirical results demonstrate superior policy performance over benchmarks across various logistics and finance domains.
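To make the reduction concrete, the following is a minimal Python sketch of the classic fractional-knapsack greedy that the abstract invokes; the sort over items dominates the $O(T \log T)$ cost. Function and variable names are illustrative, and this is not the authors' implementation.

```python
# Minimal sketch of the classic fractional-knapsack greedy that the
# SD-MDP reduction targets. Names (values, weights, budget) are
# illustrative; this is not the paper's implementation.
def fractional_knapsack(values, weights, budget):
    """Maximize total value under a capacity constraint, allowing
    fractional allocation. The sort gives the O(T log T) cost."""
    # Order items by value density, best first.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0.0, budget
    for i in order:
        take = min(weights[i], remaining)          # take as much as fits
        total += take * values[i] / weights[i]     # credit the taken fraction
        remaining -= take
        if remaining <= 0:
            break
    return total

# Example: three allocation opportunities over a short horizon.
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```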
Related papers
- Efficient Algorithms for Robust Markov Decision Processes with $s$-Rectangular Ambiguity Sets [19.053062263045664]
We develop a unified solution framework for a class of robust MDPs with $s$-rectangular ambiguity sets. Using our algorithms, we show that $s$-rectangular robust MDPs with $1$- and $2$-norm as well as divergence-based ambiguity sets can be solved several orders of magnitude faster than with state-of-the-art commercial solvers.
arXiv Detail & Related papers (2026-02-05T12:20:26Z)
- Efficient Solving of Large Single Input Superstate Decomposable Markovian Decision Process [1.17431678544333]
A key step in Bellman dynamic programming algorithms is policy evaluation. We develop an exact and efficient policy evaluation method based on this structure. This yields a scalable solution applicable to both average and discounted reward MDPs.
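For context, exact policy evaluation for a fixed policy in a discounted MDP reduces to a single linear solve; a generic textbook sketch follows (the paper's contribution is exploiting superstate structure to accelerate this step, which the sketch does not capture).

```python
import numpy as np

# Textbook exact policy evaluation: solve (I - gamma * P_pi) v = r_pi
# for the value vector v of a fixed policy pi. P_pi and r_pi below are
# illustrative toy inputs, not taken from the paper.
def evaluate_policy(P_pi, r_pi, gamma=0.95):
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

# Toy 2-state chain under a fixed policy.
P_pi = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
r_pi = np.array([1.0, 0.0])
print(evaluate_policy(P_pi, r_pi))
```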
arXiv Detail & Related papers (2025-08-01T17:49:27Z)
- Sequential Monte Carlo for Policy Optimization in Continuous POMDPs [9.690099639375456]
We introduce a novel policy optimization framework for continuous partially observable Markov decision processes (POMDPs). Our method casts policy learning as probabilistic inference in a non-Markovian Feynman--Kac model. We demonstrate the effectiveness of our algorithm across standard continuous POMDP benchmarks.
arXiv Detail & Related papers (2025-05-22T14:45:46Z)
- Learning Symbolic Persistent Macro-Actions for POMDP Solving Over Time [52.03682298194168]
This paper proposes an integration of temporal logical reasoning and Partially Observable Markov Decision Processes (POMDPs). Our method leverages a fragment of Linear Temporal Logic (LTL) based on Event Calculus (EC) to generate persistent (i.e., constant) macro-actions. These macro-actions guide Monte Carlo Tree Search (MCTS)-based POMDP solvers over a time horizon.
arXiv Detail & Related papers (2025-05-06T16:08:55Z)
- Provably Efficient RL under Episode-Wise Safety in Constrained MDPs with Linear Function Approximation [32.74649239695449]
We study the reinforcement learning problem in a constrained Markov decision process (CMDP). We propose an RL algorithm for linear CMDPs that achieves $\tilde{\mathcal{O}}(\sqrt{K})$ regret with an episode-wise zero-violation guarantee. Our results significantly improve upon recent linear CMDP algorithms, which either violate the constraint or incur exponential computational costs.
arXiv Detail & Related papers (2025-02-14T13:07:25Z)
- Anytime Incremental $\rho$POMDP Planning in Continuous Spaces [5.767643556541711]
We present an anytime solver that dynamically refines belief representations, with formal guarantees of improvement over time. We demonstrate its effectiveness for common entropy estimators, reducing computational cost by orders of magnitude. Experimental results show that $\rho$POMCPOW outperforms state-of-the-art solvers in both efficiency and solution quality.
arXiv Detail & Related papers (2025-02-04T18:19:40Z)
- Threshold UCT: Cost-Constrained Monte Carlo Tree Search with Pareto Curves [1.799933345199395]
Constrained Markov decision processes (CMDPs) are the leading framework for safe sequential decision making under uncertainty.
We introduce Threshold UCT, an online MCTS-based algorithm for CMDP planning.
Our experiments demonstrate that our approach significantly outperforms state-of-the-art methods from the literature.
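For reference, plain UCT selects actions with the UCB1-style rule below; the cost thresholds and Pareto-curve machinery that give the algorithm its name are the paper's contribution and are not shown.

```latex
a^{*} = \arg\max_{a}\left[\hat{Q}(s, a) + c\,\sqrt{\frac{\ln N(s)}{N(s, a)}}\right]
```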
arXiv Detail & Related papers (2024-12-18T15:41:47Z)
- Solving Truly Massive Budgeted Monotonic POMDPs with Oracle-Guided Meta-Reinforcement Learning [1.2774526936067927]
This paper considers the problem of solving budget-constrained multi-component monotonic POMDPs. For a large number of components, solving such a POMDP using current methods is computationally intractable. We introduce an oracle-guided meta-trained Proximal Policy Optimization (PPO) algorithm to solve each of the independent budget-constrained single-component monotonic POMDPs.
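For reference, standard PPO maximizes the clipped surrogate objective below; the oracle guidance and meta-training are the paper's additions and are not captured by this formula.

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```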
arXiv Detail & Related papers (2024-08-13T20:20:58Z)
- Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation [53.17668583030862]
We study infinite-horizon average-reward Markov decision processes (AMDPs) in the context of general function approximation.
We propose a novel algorithmic framework named Local-fitted Optimization with OPtimism (LOOP).
We show that LOOP achieves a sublinear $\tilde{\mathcal{O}}(\mathrm{poly}(d, \mathrm{sp}(V^*))\sqrt{T\beta})$ regret, where $d$ and $\beta$ correspond to AGEC and the log-covering number of the hypothesis class, respectively.
arXiv Detail & Related papers (2024-04-19T06:24:22Z)
- Learning Logic Specifications for Policy Guidance in POMDPs: an Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that learned specifications expressed in Answer Set Programming (ASP) yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics, within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z)
- Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization [64.60253456266872]
Markov decision processes (MDPs) aim to handle changing or partially known system dynamics.
Regularized MDPs show more stability in policy learning without impairing time complexity.
Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees.
arXiv Detail & Related papers (2023-03-12T13:03:28Z)
- Nearly Optimal Latent State Decoding in Block MDPs [74.51224067640717]
In episodic Block MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states.
We are first interested in estimating the latent state decoding function based on data generated under a fixed behavior policy.
We then study the problem of learning near-optimal policies in the reward-free framework.
arXiv Detail & Related papers (2022-08-17T18:49:53Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Robust Entropy-regularized Markov Decision Processes [23.719568076996662]
We study a robust version of the ER-MDP model, where the optimal policies are required to be robust.
We show that essential properties that hold for the non-robust ER-MDP and robust unregularized MDP models also hold in our settings.
We show how our framework and results can be integrated into different algorithmic schemes, including value iteration or (modified) policy iteration.
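For context, the entropy-regularized (soft) Bellman backup has a log-sum-exp closed form; a minimal non-robust sketch follows (the robust variant studied in the paper additionally minimizes over an uncertainty set of transition kernels, which is not shown, and the Q-values below are illustrative).

```python
import numpy as np

# Soft (entropy-regularized) Bellman backup:
#   V(s) = tau * log sum_a exp(Q(s, a) / tau)
# Minimal sketch of the non-robust ER-MDP update.
def soft_bellman_backup(Q, tau=1.0):
    # Log-sum-exp over the action axis, scaled by temperature tau.
    return tau * np.log(np.exp(Q / tau).sum(axis=1))

Q = np.array([[1.0, 2.0],
              [0.5, 0.5]])
print(soft_bellman_backup(Q, tau=0.5))
```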
arXiv Detail & Related papers (2021-12-31T09:50:46Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
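For context, SDDP-style methods represent the value function as the maximum over a set of affine cuts; a minimal sketch of evaluating such a piecewise-linear approximation follows (learning to emit instance-specific cuts is the paper's contribution and is not reproduced; the cut data below are illustrative).

```python
import numpy as np

# Piecewise-linear value function as a maximum over affine cuts:
#   V(x) ~= max_k (alpha_k + beta_k . x)
# This is the representation SDDP builds incrementally.
def cut_value(x, alphas, betas):
    return np.max(alphas + betas @ x)

alphas = np.array([0.0, 1.0, -2.0])   # illustrative cut intercepts
betas = np.array([[1.0, 0.0],         # illustrative cut gradients
                  [0.0, 1.0],
                  [1.0, 1.0]])
print(cut_value(np.array([0.5, 2.0]), alphas, betas))  # max(0.5, 3.0, 0.5) = 3.0
```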
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Risk-Averse Decision Making Under Uncertainty [18.467950783426947]
A large class of decision making under uncertainty problems can be described via Markov decision processes (MDPs) or partially observable MDPs (POMDPs).
In this paper, we consider the problem of designing policies for MDPs and POMDPs with objectives and constraints in terms of dynamic coherent risk measures.
arXiv Detail & Related papers (2021-09-09T07:52:35Z)
- Modular Deep Reinforcement Learning for Continuous Motion Planning with Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs).
The novelty is to design an embedded product MDP (EP-MDP) between the limit-deterministic generalized Büchi automaton (LDGBA) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
arXiv Detail & Related papers (2021-02-24T01:11:25Z)
- Identification of Unexpected Decisions in Partially Observable Monte-Carlo Planning: a Rule-Based Approach [78.05638156687343]
We propose a methodology for analyzing POMCP policies by inspecting their traces.
The proposed method explores local properties of policy behavior to identify unexpected decisions.
We evaluate our approach on Tiger, a standard benchmark for POMDPs, and a real-world problem related to mobile robot navigation.
arXiv Detail & Related papers (2020-12-23T15:09:28Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same fast semidefinite solver.
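For reference, the standard semidefinite relaxation for a binary pairwise MRF (of MAXCUT type) is shown below; the compact multi-class relaxation is the paper's contribution and takes a different form.

```latex
\max_{X}\ \langle C, X \rangle
\quad \text{s.t.}\quad X_{ii} = 1 \ \ \forall i,\qquad X \succeq 0
```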
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, nonconvex optimal control problems.
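For reference, Stein variational methods transport a set of particles $\{x_i\}_{i=1}^{n}$ along the SVGD direction below; embedding this update inside MPC is the paper's contribution.

```latex
\phi^{*}(x_i) = \frac{1}{n}\sum_{j=1}^{n}\Big[k(x_j, x_i)\,\nabla_{x_j}\log p(x_j) + \nabla_{x_j} k(x_j, x_i)\Big]
```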
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
- Exploiting Submodular Value Functions For Scaling Up Active Perception [60.81276437097671]
In active perception tasks, an agent aims to select sensory actions that reduce uncertainty about one or more hidden variables.
Partially observable Markov decision processes (POMDPs) provide a natural model for such problems.
As the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially.
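For reference, a value function $f$ over sensor subsets is submodular when it exhibits diminishing returns, the property such methods exploit to curb this exponential growth:

```latex
f(A \cup \{e\}) - f(A) \ \geq\ f(B \cup \{e\}) - f(B)
\qquad \text{for all } A \subseteq B,\ e \notin B
```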
arXiv Detail & Related papers (2020-09-21T09:11:36Z)
- Optimal Inspection and Maintenance Planning for Deteriorating Structural Components through Dynamic Bayesian Networks and Markov Decision Processes [0.0]
Partially Observable Markov Decision Processes (POMDPs) provide a mathematical methodology for optimal control under uncertain action outcomes and observations.
We provide the formulation for developing both infinite and finite horizon POMDPs in a structural reliability context.
Results show that POMDPs achieve substantially lower costs as compared to their counterparts, even for traditional problem settings.
arXiv Detail & Related papers (2020-09-09T20:03:42Z)
- Stochastic Finite State Control of POMDPs with LTL Specifications [14.163899014007647]
Partially observable Markov decision processes (POMDPs) provide a modeling framework for autonomous decision making under uncertainty.
This paper considers the quantitative problem of synthesizing sub-optimal finite state controllers (sFSCs) for POMDPs.
We propose a bounded policy iteration algorithm, leading to a controlled growth in sFSC size, and an anytime algorithm, where the performance of the controller improves with successive iterations.
arXiv Detail & Related papers (2020-01-21T18:10:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.