Learning and Planning in Average-Reward Markov Decision Processes
- URL: http://arxiv.org/abs/2006.16318v3
- Date: Mon, 28 Jun 2021 10:06:53 GMT
- Title: Learning and Planning in Average-Reward Markov Decision Processes
- Authors: Yi Wan, Abhishek Naik, Richard S. Sutton
- Abstract summary: We introduce learning and planning algorithms for average-reward MDPs.
All of our algorithms are based on using the temporal-difference error rather than the conventional error when updating the estimate of the average reward.
- Score: 15.586087060535398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce learning and planning algorithms for average-reward MDPs,
including 1) the first general proven-convergent off-policy model-free control
algorithm without reference states, 2) the first proven-convergent off-policy
model-free prediction algorithm, and 3) the first off-policy learning algorithm
that converges to the actual value function rather than to the value function
plus an offset. All of our algorithms are based on using the
temporal-difference error rather than the conventional error when updating the
estimate of the average reward. Our proof techniques are a slight
generalization of those by Abounadi, Bertsekas, and Borkar (2001). In
experiments with an Access-Control Queuing Task, we show some of the
difficulties that can arise when using methods that rely on reference states
and argue that our new algorithms can be significantly easier to use.
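To make the abstract's key design choice concrete, here is a minimal tabular sketch of an average-reward (differential) Q-learning step in which the average-reward estimate is updated with the temporal-difference error itself rather than with the conventional error (reward minus the estimate). This is an illustration consistent with the abstract, not the authors' reference implementation; the function name and the step sizes alpha and eta are hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' reference code) of a tabular
# average-reward Q-learning step in which the average-reward estimate
# r_bar is updated with the TD error, as the abstract describes.
# alpha, eta, and all names are illustrative assumptions.

def differential_q_step(Q, r_bar, s, a, r, s_next, alpha=0.1, eta=1.0):
    """Q: (num_states, num_actions) array; returns the updated (Q, r_bar)."""
    # Differential TD error: no discounting, reward centered by r_bar.
    delta = r - r_bar + np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * delta          # action-value update
    r_bar += eta * alpha * delta      # average-reward update uses the TD error
    # A conventional alternative (not used here): r_bar += beta * (r - r_bar).
    return Q, r_bar
```

Note that no reference state or action appears in this sketch, which is the property the abstract emphasizes for its control algorithm.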
Related papers
- Truly No-Regret Learning in Constrained MDPs [61.78619476991494]
We propose a model-based primal-dual algorithm to learn in an unknown CMDP.
We prove that our algorithm achieves sublinear regret without error cancellations.
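As a rough illustration of the primal-dual idea in constrained MDPs (a generic sketch, not this paper's model-based algorithm or its regret analysis), the dual variable prices constraint violation while the primal step optimizes the penalized reward; the names and step size below are assumptions.

```python
# Generic primal-dual sketch for a constrained MDP (illustrative only).
# lam is a Lagrange multiplier on the constraint E[cost] <= threshold.

def dual_update(lam, avg_cost, threshold, step=0.05):
    """Dual ascent on the constraint violation, projected to lam >= 0."""
    return max(0.0, lam + step * (avg_cost - threshold))

def penalized_reward(reward, cost, lam):
    """Primal objective: maximize reward minus the priced constraint cost."""
    return reward - lam * cost
```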
arXiv Detail & Related papers (2024-02-24T09:47:46Z)
- Algorithms that Approximate Data Removal: New Results and Limitations [2.6905021039717987]
We study the problem of deleting user data from machine learning models trained using empirical risk minimization.
We develop an online unlearning algorithm that is both computationally and memory efficient.
arXiv Detail & Related papers (2022-09-25T17:20:33Z)
- AWD3: Dynamic Reduction of the Estimation Bias [0.0]
We introduce a technique that eliminates the estimation bias in off-policy continuous control algorithms using the experience replay mechanism.
We show through continuous control environments of OpenAI gym that our algorithm matches or outperforms the state-of-the-art off-policy policy gradient learning algorithms.
arXiv Detail & Related papers (2021-11-12T15:46:19Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Robust Predictable Control [149.71263296079388]
We show that our method achieves much tighter compression than prior methods, achieving up to 5x higher reward than a standard information bottleneck.
We also demonstrate that our method learns policies that are more robust and generalize better to new tasks.
arXiv Detail & Related papers (2021-09-07T17:29:34Z)
- An Empirical Comparison of Off-policy Prediction Learning Algorithms on the Collision Task [9.207173776826403]
Off-policy prediction -- learning the value function for one policy from data generated while following another policy -- is one of the most challenging subproblems in reinforcement learning.
This paper presents empirical results with eleven prominent off-policy learning algorithms that use linear function approximation.
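For context, here is a minimal sketch of one classic member of this algorithm family: off-policy TD(0) with linear function approximation and per-step importance sampling. It is an illustrative example rather than a reproduction of the eleven algorithms that paper compares, and the names and step sizes are assumptions.

```python
import numpy as np

# Illustrative off-policy TD(0) update with linear function approximation
# and per-step importance sampling; a generic sketch, not the cited
# paper's specific implementations.

def off_policy_linear_td0(w, x, r, x_next, rho, alpha=0.01, gamma=0.99):
    """w: weights; x, x_next: feature vectors; rho = pi(a|s) / b(a|s)."""
    delta = r + gamma * np.dot(w, x_next) - np.dot(w, x)  # TD error
    return w + alpha * rho * delta * x  # importance-weighted semi-gradient step
```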
arXiv Detail & Related papers (2021-06-02T03:45:43Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is a squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
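The regularized estimate described here amounts to ridge regression biased toward a shared vector b; a minimal sketch of that estimate follows (the names and the regularization weight lam are assumptions, not taken from the paper).

```python
import numpy as np

# Sketch of the bias-regularized least-squares estimate behind an OFUL-style
# bandit with a squared-Euclidean-distance regularizer toward a bias vector b:
#   theta_hat = argmin_theta ||X theta - y||^2 + lam * ||theta - b||^2
# Names and lam are illustrative assumptions.

def biased_ridge_estimate(X, y, b, lam=1.0):
    """X: (n, d) contexts, y: (n,) rewards, b: (d,) bias vector."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * b)
```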
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
- Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures [62.41541049302712]
We propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures.
Our algorithm uses the failures budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods.
arXiv Detail & Related papers (2020-05-15T09:54:09Z)
- Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison [17.692408242465763]
We prove performance guarantees of two algorithms for approximating $Q^\star$ in batch reinforcement learning.
One of the algorithms uses a novel and explicit importance-weighting correction to overcome the infamous "double sampling" difficulty in Bellman error estimation.
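For background, the "double sampling" difficulty refers to a standard identity: the expected squared TD error overestimates the squared Bellman error by a variance term that depends on the unknown transition distribution, so minimizing it does not minimize the Bellman error. The identity below is stated only for context; it is not the paper's importance-weighting correction itself.

```latex
% Bias-variance decomposition behind the double-sampling difficulty
% (standard identity, shown for context only):
\mathbb{E}\!\left[\big(R + \gamma \max_{a'} Q(S',a') - Q(s,a)\big)^2 \,\middle|\, s, a\right]
  = \big((\mathcal{T} Q)(s,a) - Q(s,a)\big)^2
  + \operatorname{Var}\!\left[R + \gamma \max_{a'} Q(S',a') \,\middle|\, s, a\right]
```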
arXiv Detail & Related papers (2020-03-09T05:12:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.