Recursive Reinforcement Learning
- URL: http://arxiv.org/abs/2206.11430v1
- Date: Thu, 23 Jun 2022 00:29:42 GMT
- Title: Recursive Reinforcement Learning
- Authors: Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh
Trivedi, Dominik Wojtczak
- Abstract summary: Recursion is the fundamental paradigm to finitely describe potentially infinite objects.
We develop RL algorithms capable of computing optimal policies in environments described as a collection of Markov decision processes (MDPs) that can recursively invoke one another.
- Score: 4.429642479975602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recursion is the fundamental paradigm to finitely describe potentially
infinite objects. As state-of-the-art reinforcement learning (RL) algorithms
cannot directly reason about recursion, they must rely on the practitioner's
ingenuity in designing a suitable "flat" representation of the environment. The
resulting manual feature constructions and approximations are cumbersome and
error-prone; their lack of transparency hampers scalability. To overcome these
challenges, we develop RL algorithms capable of computing optimal policies in
environments described as a collection of Markov decision processes (MDPs) that
can recursively invoke one another. Each constituent MDP is characterized by
several entry and exit points that correspond to input and output values of
these invocations. These recursive MDPs (or RMDPs) are expressively equivalent
to probabilistic pushdown systems (with call-stack playing the role of the
pushdown stack), and can model probabilistic programs with recursive procedural
calls. We introduce Recursive Q-learning -- a model-free RL algorithm for RMDPs
-- and prove that it converges for finite, single-exit and deterministic
multi-exit RMDPs under mild assumptions.
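To make the RMDP setup concrete, the following toy Python sketch simulates the dynamics the abstract describes: component MDPs with entry and exit nodes that may invoke one another, with the call stack playing the role of the pushdown stack. The single self-recursive component, its node names, transition probabilities, and the simulator API are all invented for illustration; they are not taken from the paper, and the sketch does not implement the paper's Recursive Q-learning algorithm.

```python
# Toy RMDP simulator (illustrative only; names and layout are invented).
# A single-exit component "main" either finishes immediately or recursively
# invokes itself, collecting a reward of 1 per call before unwinding.
import random

# A transition is (prob, reward, target). A target is ("node", n) for an
# internal move, or ("call", callee, return_node): invoke `callee` and resume
# at `return_node` in the caller once the callee reaches its exit.
COMPONENTS = {
    "main": {
        "entry": "start",
        "exit": "done",
        "transitions": {
            "start": [
                (0.5, 1.0, ("call", "main", "done")),  # recurse, then resume at "done"
                (0.5, 0.0, ("node", "done")),          # or exit immediately
            ],
        },
    },
}

def reset():
    """Begin a run in "main"; each stack frame is (component, return node)."""
    return [("main", None)], COMPONENTS["main"]["entry"]

def step(stack, node):
    """One move of the RMDP; returns (stack, node, reward, done)."""
    comp, return_node = stack[-1]
    spec = COMPONENTS[comp]
    if node == spec["exit"]:          # exit node: pop a frame (pushdown "pop")
        stack = stack[:-1]
        if not stack:                 # outermost invocation has returned
            return stack, None, 0.0, True
        return stack, return_node, 0.0, False
    u, acc = random.random(), 0.0
    for prob, reward, target in spec["transitions"][node]:
        acc += prob
        if u <= acc:
            if target[0] == "call":   # call node: push a frame (pushdown "push")
                _, callee, ret = target
                return stack + [(callee, ret)], COMPONENTS[callee]["entry"], reward, False
            return stack, target[1], reward, False
    raise AssertionError("transition probabilities must sum to 1")

# One random rollout: the accumulated reward equals the (geometric) number of
# recursive calls made before the recursion unwound.
stack, node = reset()
total, done = 0.0, False
while not done:
    stack, node, reward, done = step(stack, node)
    total += reward
print("accumulated reward:", total)
```

Flattening even this one-component example would require encoding the unbounded call stack into the state, which is exactly the kind of manual "flat" representation the abstract argues against; Recursive Q-learning is designed to operate on the recursive structure directly.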
Related papers
- Q-learning for Quantile MDPs: A Decomposition, Performance, and Convergence Analysis [30.713243690224207]
In Markov decision processes (MDPs), quantile risk measures such as Value-at-Risk are standard metrics for modeling RL agents' preferences for certain outcomes.
This paper proposes a new Q-learning algorithm for quantile optimization in MDPs with strong convergence and performance guarantees.
arXiv Detail & Related papers (2024-10-31T16:53:20Z)
- Tractable Offline Learning of Regular Decision Processes [50.11277112628193]
This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments called Regular Decision Processes (RDPs).
In RDPs, the unknown dependency of future observations and rewards on past interactions can be captured by a hidden finite-state automaton.
Many algorithms first reconstruct this unknown dependency using automata learning techniques.
arXiv Detail & Related papers (2024-09-04T14:26:58Z)
- REBEL: Reinforcement Learning via Regressing Relative Rewards [59.68420022466047]
We propose REBEL, a minimalist RL algorithm for the era of generative models.
In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL.
We find that REBEL provides a unified approach to language modeling and image generation, with performance that matches or exceeds that of PPO and DPO.
arXiv Detail & Related papers (2024-04-25T17:20:45Z)
- Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion [14.739369424331478]
We introduce a general framework that nicely connects the abstract concepts of structural recursion in the programming language domain to sequence modeling problems and learned models' behavior.
With our framework as a powerful conceptual tool, we identify different issues under various set-ups.
arXiv Detail & Related papers (2024-01-23T18:07:38Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
A sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, a guaranteed near-optimal last-iterate policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- On Practical Robust Reinforcement Learning: Practical Uncertainty Set and Double-Agent Algorithm [11.748284119769039]
Robust reinforcement learning (RRL) aims to find a robust policy that optimizes worst-case performance over an uncertainty set of Markov decision processes (MDPs).
arXiv Detail & Related papers (2023-05-11T08:52:09Z)
- B$^3$RTDP: A Belief Branch and Bound Real-Time Dynamic Programming Approach to Solving POMDPs [17.956744635160568]
We propose an extension of the RTDP-Bel algorithm, which we call Belief Branch and Bound RTDP (B$^3$RTDP).
Our algorithm uses a bounded value function representation and takes advantage of this in two novel ways.
We empirically demonstrate that B$^3$RTDP can achieve greater returns in less time than the state-of-the-art SARSOP solver on known POMDP problems.
arXiv Detail & Related papers (2022-10-22T21:42:59Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for TMDPs, obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Modular Deep Reinforcement Learning for Continuous Motion Planning with Temporal Logic [59.94347858883343]
This paper investigates the motion planning of autonomous dynamical systems modeled by Markov decision processes (MDPs).
The novelty is to design an embedded product MDP (EP-MDP) between a limit-deterministic generalized Büchi automaton (LDGBA) and the MDP.
The proposed LDGBA-based reward shaping and discounting schemes for the model-free reinforcement learning (RL) only depend on the EP-MDP states.
arXiv Detail & Related papers (2021-02-24T01:11:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.