A Relation Analysis of Markov Decision Process Frameworks
- URL: http://arxiv.org/abs/2008.07820v1
- Date: Tue, 18 Aug 2020 09:27:26 GMT
- Title: A Relation Analysis of Markov Decision Process Frameworks
- Authors: Tien Mai and Patrick Jaillet
- Abstract summary: We study the relation between different Markov Decision Process (MDP) frameworks in the machine learning and econometrics literature.
We show that the entropy-regularized MDP is equivalent to a stochastic MDP model, and is strictly subsumed by the general regularized MDP.
- Score: 26.308541799686505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the relation between different Markov Decision Process (MDP)
frameworks in the machine learning and econometrics literatures, including the
standard MDP, the entropy and general regularized MDP, and stochastic MDP,
where the latter is based on the assumption that the reward function is
stochastic and follows a given distribution. We show that the
entropy-regularized MDP is equivalent to a stochastic MDP model, and is
strictly subsumed by the general regularized MDP. Moreover, we propose a
distributional stochastic MDP framework by assuming that the distribution of
the reward function is ambiguous. We further show that the distributional
stochastic MDP is equivalent to the regularized MDP, in the sense that they
always yield the same optimal policies. We also provide a connection between
stochastic/regularized MDP and constrained MDP. Our work gives a unified view
on several important MDP frameworks, which would lead to new ways of interpreting the
(entropy/general) regularized MDP frameworks through the lens of stochastic
rewards and vice-versa. Given the recent popularity of regularized MDP in
(deep) reinforcement learning, our work brings a new understanding of how such
algorithmic schemes work and suggests ideas for developing new ones.
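As a pointer to the kind of equivalence the abstract describes, the standard log-sum-exp/Gumbel identity from maximum-entropy RL and discrete-choice theory illustrates how an entropy regularizer can be read as stochastic rewards (a generic identity stated here for intuition, not the paper's own derivation):
\[
V_\tau(s) \;=\; \max_{\pi(\cdot\mid s)} \sum_a \pi(a\mid s)\big[Q_\tau(s,a) - \tau\log\pi(a\mid s)\big] \;=\; \tau\log\sum_a \exp\!\big(Q_\tau(s,a)/\tau\big),
\]
\[
\mathbb{E}_\varepsilon\Big[\max_a \big(Q_\tau(s,a) + \varepsilon(a)\big)\Big] \;=\; \tau\log\sum_a \exp\!\big(Q_\tau(s,a)/\tau\big) + \tau\gamma, \qquad \varepsilon(a)\stackrel{\text{i.i.d.}}{\sim}\mathrm{Gumbel}(0,\tau),
\]
where \gamma is the Euler-Mascheroni constant. Both expressions induce the same softmax (logit) optimal policy \pi^*(a\mid s)\propto\exp(Q_\tau(s,a)/\tau), which is why entropy regularization and i.i.d. Gumbel reward shocks, the random-utility model common in the econometrics literature, lead to the same behavior up to an additive constant in the value.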
Related papers
- Solving Multi-Model MDPs by Coordinate Ascent and Dynamic Programming [8.495921422521068]
The multi-model Markov decision process (MMDP) is a promising framework for computing policies.
MMDPs aim to find a policy that maximizes the expected return over a distribution of MDP models.
We propose CADP, which combines a coordinate ascent method and a dynamic programming algorithm for solving MMDPs.
arXiv Detail & Related papers (2024-07-08T18:47:59Z)
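For context on the MMDP setting summarized above, the objective can be written compactly as follows (our notation, not taken from the paper):
\[
\max_\pi \; \sum_{m=1}^{M} \lambda_m \, \mathbb{E}^{\pi, M_m}\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big],
\]
where M_1,\dots,M_M are candidate MDP models, \lambda is a weight (or prior) over the models, and a single policy \pi must be chosen without knowing which model is the true one.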
- Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization [64.60253456266872]
Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics.
Regularized MDPs show more stability in policy learning without impairing time complexity.
Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees.
arXiv Detail & Related papers (2023-03-12T13:03:28Z)
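A one-line duality computation conveys why reward robustness and regularization coincide in the setting summarized above (a standard support-function argument, shown as a sketch rather than the paper's exact statement): for a policy \pi and a reward-uncertainty ball of radius \alpha_s around the nominal reward r_s,
\[
\min_{\|r'_s - r_s\| \le \alpha_s} \langle \pi_s, r'_s\rangle \;=\; \langle \pi_s, r_s\rangle - \alpha_s\,\|\pi_s\|_*,
\]
so playing against worst-case rewards is the same as maximizing the nominal reward minus a policy-norm penalty, where \|\cdot\|_* denotes the dual norm.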
- Optimality Guarantees for Particle Belief Approximation of POMDPs [55.83001584645448]
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems.
POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid.
We propose a theory characterizing the approximation error of the particle filtering techniques that these algorithms use.
arXiv Detail & Related papers (2022-10-10T21:11:55Z)
- Twice regularized MDPs and the equivalence between robustness and regularization [65.58188361659073]
We show that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs.
We generalize regularized MDPs to twice regularized MDPs.
arXiv Detail & Related papers (2021-10-12T18:33:45Z)
- Safe Exploration by Solving Early Terminated MDP [77.10563395197045]
We introduce a new approach to address safe RL problems under the framework of the Early Terminated MDP (ET-MDP).
We first define the ET-MDP as an unconstrained MDP with the same optimal value function as its corresponding CMDP.
An off-policy algorithm based on context models is then proposed to solve the ET-MDP, which thereby solves the corresponding CMDP with better performance and improved learning efficiency.
arXiv Detail & Related papers (2021-07-09T04:24:40Z)
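Since both the ET-MDP entry above and the main abstract refer to constrained MDPs, it may help to recall the standard CMDP formulation (a generic definition, not specific to either paper):
\[
\max_\pi \; \mathbb{E}^\pi\Big[\sum_t \gamma^t r(s_t,a_t)\Big] \quad \text{s.t.} \quad \mathbb{E}^\pi\Big[\sum_t \gamma^t c(s_t,a_t)\Big] \le d,
\]
with cost function c and budget d; roughly speaking, the ET-MDP construction removes the explicit constraint by terminating an episode once the safety constraint is violated.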
- Reward is enough for convex MDPs [30.478950691312715]
We study convex MDPs in which goals are expressed as convex functions of the stationary distribution.
We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature.
arXiv Detail & Related papers (2021-06-01T17:46:25Z)
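The convex-MDP objective summarized above can be stated compactly (our notation; d^\pi is the discounted state-action occupancy measure and f is convex):
\[
\min_\pi \; f(d^\pi), \qquad d^\pi(s,a) \;=\; (1-\gamma)\sum_{t\ge 0}\gamma^t\,\Pr^\pi(s_t=s,\,a_t=a),
\]
which recovers the standard MDP when f is linear, f(d) = -\langle r, d\rangle, and covers objectives such as maximum-entropy exploration or imitation via divergence minimization.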
- Modular Deep Reinforcement Learning for Continuous Motion Planning with Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs).
The novelty is to design an embedded product MDP (EP-MDP) between the LDGBA (limit-deterministic generalized Büchi automaton) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
arXiv Detail & Related papers (2021-02-24T01:11:25Z)
- RL for Latent MDPs: Regret Guarantees and a Lower Bound [74.41782017817808]
We consider the regret minimization problem for reinforcement learning in latent Markov Decision Processes (LMDP).
In an LMDP, an MDP is randomly drawn from a set of $M$ possible MDPs at the beginning of the interaction, but the identity of the chosen MDP is not revealed to the agent.
We show that the key link is a notion of separation between the MDP system dynamics.
arXiv Detail & Related papers (2021-02-09T16:49:58Z)