Representations for Stable Off-Policy Reinforcement Learning
- URL: http://arxiv.org/abs/2007.05520v2
- Date: Fri, 2 Oct 2020 20:58:51 GMT
- Title: Representations for Stable Off-Policy Reinforcement Learning
- Authors: Dibya Ghosh, Marc G. Bellemare
- Abstract summary: Reinforcement learning with function approximation can be unstable and even divergent.
We show that there are non-trivial state representations under which the canonical TD algorithm is stable, even when learning off-policy.
We conclude by empirically demonstrating that these stable representations can be learned using gradient descent.
- Score: 37.561660796265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning with function approximation can be unstable and even
divergent, especially when combined with off-policy learning and Bellman
updates. In deep reinforcement learning, these issues have been dealt with
empirically by adapting and regularizing the representation, in particular with
auxiliary tasks. This suggests that representation learning may provide a means
to guarantee stability. In this paper, we formally show that there are indeed
nontrivial state representations under which the canonical TD algorithm is
stable, even when learning off-policy. We analyze representation learning
schemes that are based on the transition matrix of a policy, such as
proto-value functions, along three axes: approximation error, stability, and
ease of estimation. In the most general case, we show that a Schur basis
provides convergence guarantees, but is difficult to estimate from samples. For
a fixed reward function, we find that an orthogonal basis of the corresponding
Krylov subspace is an even better choice. We conclude by empirically
demonstrating that these stable representations can be learned using stochastic
gradient descent, opening the door to improved techniques for representation
learning with deep networks.
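To make the abstract's stability claims concrete, below is a minimal, self-contained sketch (not code from the paper). It builds a synthetic transition matrix, constructs two candidate representations in the spirit described above (an orthogonal basis of the Krylov subspace of a fixed reward, and the leading vectors of a real Schur decomposition of the transition matrix), and checks the standard condition under which the expected off-policy linear TD(0) update is stable. The MDP, the off-policy weighting, the feature count k, and the random-feature baseline are illustrative assumptions; the paper's formal guarantees are not reproduced here.

```python
# Illustrative sketch only: synthetic MDP, candidate representations, and a
# stability check for the expected off-policy linear TD(0) update.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
n, k, gamma = 20, 5, 0.9   # number of states, number of features, discount

# Target-policy transition matrix P (row-stochastic) and a fixed reward vector r.
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
r = rng.standard_normal(n)

# Off-policy state weighting D (deliberately not the stationary distribution of P).
d = rng.random(n)
D = np.diag(d / d.sum())

def is_td_stable(Phi, P, D, gamma):
    """The expected (ODE) dynamics of off-policy linear TD(0) are
    theta_dot = A theta + b with A = Phi^T D (gamma*P - I) Phi; they are
    stable exactly when A is Hurwitz (all eigenvalues have negative real part)."""
    A = Phi.T @ D @ (gamma * P - np.eye(len(P))) @ Phi
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Representation 1: orthogonal basis of the Krylov subspace span{r, Pr, ..., P^{k-1} r}.
K = np.column_stack([np.linalg.matrix_power(P, i) @ r for i in range(k)])
Phi_krylov, _ = np.linalg.qr(K)

# Representation 2: first k vectors of a real Schur decomposition of P.
# (For simplicity this may split a 2x2 block of the real Schur form; a careful
# implementation would keep complex-conjugate blocks together.)
_, Q = schur(P, output="real")
Phi_schur = Q[:, :k]

# Baseline: random features, which can make off-policy TD diverge.
Phi_rand = rng.standard_normal((n, k))

for name, Phi in [("Krylov", Phi_krylov), ("Schur", Phi_schur), ("random", Phi_rand)]:
    print(f"{name:>6} basis stable for this weighting:", is_td_stable(Phi, P, D, gamma))
```

The check relies only on the classical fact that the expected TD(0) update is governed by the matrix A above; which bases pass for which weightings is precisely the question the paper analyzes theoretically.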
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Stable Offline Value Function Learning with Bisimulation-based Representations [13.013000247825248]
In reinforcement learning, offline value function learning is used to estimate the expected discounted return from each state when taking actions according to a fixed target policy.
It is critical to stabilize value function learning by explicitly shaping the state-action representations.
We introduce a bisimulation-based algorithm called kernel representations for offline policy evaluation (KROPE).
arXiv Detail & Related papers (2024-10-02T15:13:25Z)
- Tractable Uncertainty for Structure Learning [21.46601360284884]
We present Tractable Uncertainty for STructure learning, a framework for approximate posterior inference.
Probabilistic circuits can be used as an augmented representation for structure learning methods.
arXiv Detail & Related papers (2022-04-29T15:54:39Z)
- Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism [65.46524775457928]
Offline reinforcement learning seeks to utilize offline/historical data to optimize sequential decision-making strategies.
We study the statistical limits of offline reinforcement learning with linear model representations.
arXiv Detail & Related papers (2022-03-11T09:00:12Z)
- Towards Robust Bisimulation Metric Learning [3.42658286826597]
Bisimulation metrics offer one solution to the representation learning problem.
We generalize value function approximation bounds for on-policy bisimulation metrics to non-optimal policies.
We find that these issues stem from an underconstrained dynamics model and an unstable dependence of the embedding norm on the reward signal.
arXiv Detail & Related papers (2021-10-27T00:32:07Z)
- A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes whose complexity is independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Scalable Uncertainty for Computer Vision with Functional Variational Inference [18.492485304537134]
We leverage the formulation of variational inference in function space.
We obtain predictive uncertainty estimates at the cost of a single forward pass through any chosen CNN architecture.
We propose numerically efficient algorithms which enable fast training in the context of high-dimensional tasks.
arXiv Detail & Related papers (2020-03-06T19:09:42Z)
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with $\textit{external uncertainty}$ in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)