Deciding What to Model: Value-Equivalent Sampling for Reinforcement
Learning
- URL: http://arxiv.org/abs/2206.02072v1
- Date: Sat, 4 Jun 2022 23:36:38 GMT
- Title: Deciding What to Model: Value-Equivalent Sampling for Reinforcement
Learning
- Authors: Dilip Arumugam and Benjamin Van Roy
- Abstract summary: We introduce an algorithm that computes an approximately-value-equivalent, lossy compression of the environment which an agent may feasibly target in lieu of the true model.
We prove an information-theoretic, Bayesian regret bound for our algorithm that holds for any finite-horizon, episodic sequential decision-making problem.
- Score: 21.931580762349096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quintessential model-based reinforcement-learning agent iteratively
refines its estimates or prior beliefs about the true underlying model of the
environment. Recent empirical successes in model-based reinforcement learning
with function approximation, however, eschew the true model in favor of a
surrogate that, while ignoring various facets of the environment, still
facilitates effective planning over behaviors. Recently formalized as the value
equivalence principle, this algorithmic technique is perhaps unavoidable as
real-world reinforcement learning demands consideration of a simple,
computationally-bounded agent interacting with an overwhelmingly complex
environment, whose underlying dynamics likely exceed the agent's capacity for
representation. In this work, we consider the scenario where agent limitations
may entirely preclude identifying an exactly value-equivalent model,
immediately giving rise to a trade-off between identifying a model that is
simple enough to learn and incurring only bounded sub-optimality. To address
this problem, we introduce an algorithm that, using rate-distortion theory,
iteratively computes an approximately-value-equivalent, lossy compression of
the environment which an agent may feasibly target in lieu of the true model.
We prove an information-theoretic, Bayesian regret bound for our algorithm that
holds for any finite-horizon, episodic sequential decision-making problem.
Crucially, our regret bound can be expressed in one of two possible forms,
providing a performance guarantee for finding either the simplest model that
achieves a desired sub-optimality gap or, alternatively, the best model given a
limit on agent capacity.
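These two forms mirror the two standard objects of rate-distortion theory. As a sketch (the notation here is assumed for illustration, not quoted from the paper), let M denote the unknown environment drawn from the agent's beliefs, M-tilde a compressed surrogate, and d(M, M-tilde) a distortion capturing the value-equivalence gap:

```latex
% Rate-distortion function: the least information (simplest model)
% needed to meet a sub-optimality tolerance D -- notation assumed.
\mathcal{R}(D) = \min_{p(\tilde{M} \mid M)\;:\;\mathbb{E}[d(M,\tilde{M})] \le D} I(M;\tilde{M})

% Distortion-rate function: the best achievable surrogate under a
% capacity limit R on the agent.
\mathcal{D}(R) = \min_{p(\tilde{M} \mid M)\;:\;I(M;\tilde{M}) \le R} \mathbb{E}[d(M,\tilde{M})]
```

The first form corresponds to the simplest model achieving a desired sub-optimality gap; the second to the best model attainable under a capacity limit.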
Related papers
- When Demonstrations Meet Generative World Models: A Maximum Likelihood
Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
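As a rough illustration of the maximum-likelihood viewpoint (a generic sketch under simplifying assumptions, not the paper's actual estimator), suppose the expert is Boltzmann-rational with a reward linear in known action features; fitting the reward weights then reduces to ascending the demonstrations' log-likelihood:

```python
import numpy as np

# Hypothetical setup: the expert picks among K discrete actions; reward is
# linear in action features, and the expert acts Boltzmann-rationally, so
# maximum-likelihood reward recovery reduces to softmax regression.
rng = np.random.default_rng(0)
K, d = 5, 3
phi = rng.normal(size=(K, d))            # action features (assumed known)
theta_true = np.array([1.0, -2.0, 0.5])  # ground-truth reward weights

def policy(theta):
    logits = phi @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Fixed, finite demonstration set drawn from the "expert"
demos = rng.choice(K, size=500, p=policy(theta_true))
counts = np.bincount(demos, minlength=K) / len(demos)

# Gradient ascent on the demonstrations' log-likelihood:
# grad = empirical feature expectations - model feature expectations
theta = np.zeros(d)
for _ in range(2000):
    theta += 0.1 * (phi.T @ (counts - policy(theta)))
print("recovered reward weights:", theta)
```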
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- General multi-fidelity surrogate models: Framework and active learning
strategies for efficient rare event simulation [1.708673732699217]
Estimating the probability of failure for complex real-world systems is often prohibitively expensive.
This paper presents a robust multi-fidelity surrogate modeling strategy.
It is shown to be highly accurate while drastically reducing the number of high-fidelity model calls.
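A common pattern behind such savings, shown as a minimal sketch (the toy models and the trust band are assumptions, not the paper's framework): trust the cheap surrogate away from its predicted limit state, and spend high-fidelity evaluations only on the ambiguous band near the failure boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):                  # expensive truth: failure if g(x) < 0
    return x[:, 0] ** 3 + x[:, 1] - 4.0

def low_fidelity(x):                   # cheap, slightly biased surrogate
    return x[:, 0] ** 3 + x[:, 1] - 3.7

n = 100_000
x = rng.normal(size=(n, 2))
g = low_fidelity(x)

# Re-evaluate only the band near the surrogate's predicted limit state;
# outside the band the surrogate's bias cannot flip the failure label.
band = np.abs(g) < 0.5
g[band] = high_fidelity(x[band])

print(f"P(failure) ~= {np.mean(g < 0):.4f} "
      f"using {band.sum()} high-fidelity calls instead of {n}")
```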
arXiv Detail & Related papers (2022-12-07T00:03:21Z)
- When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme that yields a non-decreasing performance guarantee for model-based RL (MBRL).
The bounds we derive reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
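One way to picture such a trigger, as a schematic sketch (the distance measure and the constant are illustrative assumptions, not the paper's derived bound): accept a newly learned model only when its predicted performance gain outweighs a penalty on how far the model shifted.

```python
import numpy as np

# Schematic shift-constrained update rule: keep the new model only when
# the predicted gain dominates a penalty on the model shift. The constant
# c and the TV-style distance are assumptions for illustration.
def model_shift(P_old, P_new):
    """Max total-variation distance over state-action pairs."""
    return 0.5 * np.abs(P_old - P_new).sum(axis=-1).max()

def should_update(P_old, P_new, predicted_gain, c=2.0):
    return predicted_gain > c * model_shift(P_old, P_new)

rng = np.random.default_rng(2)
P_old = np.full((4, 2, 4), 0.25)              # uniform |S| x |A| x |S| model
P_new = np.clip(P_old + 0.02 * rng.normal(size=P_old.shape), 0.0, 1.0)
P_new /= P_new.sum(axis=-1, keepdims=True)    # renormalize transition rows
print(should_update(P_old, P_new, predicted_gain=0.1))
```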
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Between Rate-Distortion Theory & Value Equivalence in Model-Based
Reinforcement Learning [21.931580762349096]
We introduce an algorithm for synthesizing simple and useful approximations of the environment from which an agent might still recover near-optimal behavior.
We recognize the information-theoretic nature of this lossy environment compression problem and use the appropriate tools of rate-distortion theory to make mathematically precise how value equivalence can lend tractability to otherwise intractable sequential decision-making problems.
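Rate-distortion problems of this kind can in principle be attacked with classic Blahut-Arimoto iterations; the sketch below solves a toy instance in which a prior over four candidate environments is compressed into surrogates under an assumed value-gap distortion matrix (all numbers are illustrative):

```python
import numpy as np

# Toy Blahut-Arimoto: compress a prior over 4 candidate environments into
# surrogates, trading rate I(M; M~) against an assumed distortion
# d[i, j] = value gap from planning with surrogate j in environment i.
p_M = np.array([0.4, 0.3, 0.2, 0.1])          # prior over environments
d = np.array([[0.0, 0.2, 0.8, 1.0],
              [0.2, 0.0, 0.7, 0.9],
              [0.8, 0.7, 0.0, 0.3],
              [1.0, 0.9, 0.3, 0.0]])          # illustrative value gaps
beta = 5.0                                    # multiplier on distortion

q = np.full(4, 0.25)                          # marginal over surrogates
for _ in range(200):
    W = q[None, :] * np.exp(-beta * d)        # optimal channel p(M~ | M)
    W /= W.sum(axis=1, keepdims=True)
    q = p_M @ W                               # induced surrogate marginal

rate = (p_M[:, None] * W * np.log2(W / q[None, :])).sum()
distortion = (p_M[:, None] * W * d).sum()
print(f"rate ~= {rate:.3f} bits, expected value gap ~= {distortion:.3f}")
```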
arXiv Detail & Related papers (2022-06-04T17:09:46Z)
- Control-Oriented Model-Based Reinforcement Learning with Implicit
Differentiation [11.219641045667055]
We propose an end-to-end approach to model learning that directly optimizes the expected returns using implicit differentiation.
We provide theoretical and empirical evidence highlighting the benefits of our approach in the model misspecification regime compared to likelihood-based methods.
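The core trick, shown here on a generic fixed-point problem rather than the authors' full pipeline, is the implicit function theorem: if the solver's output x*(theta) satisfies f(x*(theta), theta) = 0, then dx*/dtheta = -(df/dx)^{-1} df/dtheta, so downstream objectives can be differentiated with respect to model parameters without unrolling the solver.

```python
import numpy as np

# Implicit differentiation on a toy problem: x*(theta) solves
# f(x, theta) = x**3 + theta * x - 1 = 0. We never unroll the solver;
# the gradient comes from the implicit function theorem.
def solve(theta, x=1.0):
    for _ in range(50):                        # Newton's method
        f, fx = x**3 + theta * x - 1.0, 3 * x**2 + theta
        x -= f / fx
    return x

theta = 2.0
x_star = solve(theta)
# dx*/dtheta = -(df/dx)^{-1} * df/dtheta, evaluated at the solution
dx_dtheta = -x_star / (3 * x_star**2 + theta)

eps = 1e-6                                     # finite-difference check
print(dx_dtheta, (solve(theta + eps) - solve(theta - eps)) / (2 * eps))
```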
arXiv Detail & Related papers (2021-06-06T23:15:49Z)
- A bandit-learning approach to multifidelity approximation [7.960229223744695]
Multifidelity approximation is an important technique in scientific computation and simulation.
We introduce a bandit-learning approach for leveraging data of varying fidelities to achieve precise estimates.
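One hedged way to picture such a scheme: treat each fidelity level as an arm whose payoff is estimator precision bought per unit cost, and let a UCB-style rule allocate the budget (the payoff definition below is an illustrative assumption, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Arms = fidelity levels of a simulator. Payoff of pulling an arm is
# (illustratively) the precision it buys per unit cost; UCB1 learns
# which fidelity is most cost-effective.
costs = np.array([1.0, 5.0, 25.0])
noise_sd = np.array([2.0, 0.6, 0.1])           # higher fidelity -> less noise
true_payoff = 1.0 / (noise_sd**2 * costs)      # unknown to the learner

n_arms = len(costs)
pulls, means = np.zeros(n_arms), np.zeros(n_arms)
for t in range(1, 1000):
    if t <= n_arms:
        a = t - 1                              # pull each arm once first
    else:
        a = int(np.argmax(means + np.sqrt(2 * np.log(t) / pulls)))
    r = true_payoff[a] + 0.05 * rng.normal()   # noisy observed payoff
    pulls[a] += 1
    means[a] += (r - means[a]) / pulls[a]

print("fraction of budget per fidelity:", pulls / pulls.sum())
```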
arXiv Detail & Related papers (2021-03-29T05:29:35Z)
- COMBO: Conservative Offline Model-Based Policy Optimization [120.55713363569845]
Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable.
We develop a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-actions.
We find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods.
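The regularizer can be pictured as a conservatism penalty that pushes Q-values down on model-generated (potentially out-of-support) state-actions and up on dataset state-actions; a schematic critic loss in that spirit (shapes and the weight beta are assumptions, not COMBO's exact objective):

```python
import numpy as np

# Schematic conservative critic loss: standard TD error plus a penalty
# that lowers Q on model rollouts (possibly out-of-support) and raises
# it on dataset samples. Arrays stand in for minibatches of Q-values.
def conservative_loss(q_data, q_model, td_error, beta=1.0):
    penalty = q_model.mean() - q_data.mean()
    return (td_error**2).mean() + beta * penalty

rng = np.random.default_rng(4)
q_data = rng.normal(1.0, 0.1, 256)    # Q on dataset state-actions
q_model = rng.normal(1.3, 0.1, 256)   # Q on model-generated state-actions
td_error = rng.normal(0.0, 0.2, 256)
print(conservative_loss(q_data, q_model, td_error))
```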
arXiv Detail & Related papers (2021-02-16T18:50:32Z)
- Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real data and data simulated from an inaccurately estimated model in order to improve policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
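One plausible way to realize such an adaptation signal (an assumption for illustration; the paper's own loss may differ) is a distribution-matching penalty, for example a kernel MMD between features of real transitions and of model rollouts:

```python
import numpy as np

rng = np.random.default_rng(5)

# Unsupervised adaptation signal as a distribution-matching loss: an
# RBF-kernel MMD^2 between features of real transitions and of model
# rollouts (kernel choice and bandwidth are illustrative assumptions).
def mmd2(x, y, sigma=1.0):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

real = rng.normal(0.0, 1.0, size=(128, 4))   # features of real data
sim = rng.normal(0.3, 1.0, size=(128, 4))    # features of model rollouts
print("adaptation loss:", mmd2(real, sim))   # drive this toward zero
```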
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
- Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
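A minimal score-function (REINFORCE) estimator for such a goal-directed objective, on a stand-in task of generating token sequences that sum to a target (the task and the per-step policy are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(6)

# REINFORCE on a toy discrete-generation task: sample length-3 token
# sequences (tokens 0..4); reward is 1 if they sum to the target.
T, K, target = 3, 5, 7
logits = np.zeros((T, K))                      # independent per-step policy

for _ in range(3000):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    seq = np.array([rng.choice(K, p=probs[t]) for t in range(T)])
    r = 1.0 if seq.sum() == target else 0.0
    for t in range(T):                         # grad log pi = onehot - probs
        g = -probs[t]
        g[seq[t]] += 1.0
        logits[t] += 0.5 * r * g               # REINFORCE update

probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("most likely sequence:", probs.argmax(axis=1))  # should sum to 7
```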
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- On the model-based stochastic value gradient for continuous
reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
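The essence of a model-based stochastic value gradient, in a toy linear-quadratic setting (all dynamics, shapes, and constants are assumed for illustration): roll the learned differentiable model forward and backpropagate the reward into the policy parameters.

```python
import numpy as np

# Toy stochastic value gradient: linear policy a = k*s, learned linear
# model s' = A*s + B*a + noise, reward r = -(s')**2. The gradient of the
# one-step return w.r.t. k is obtained by differentiating through the model.
A, B = 0.9, 0.5
rng = np.random.default_rng(7)

k = 0.0
for _ in range(500):
    s = rng.normal(0.0, 1.0, size=256)         # sampled start states
    eps = 0.1 * rng.normal(size=256)           # reparameterized noise
    s_next = A * s + B * (k * s) + eps
    # d/dk of mean(-(s_next)^2) = mean(-2 * s_next * B * s)
    grad = np.mean(-2.0 * s_next * B * s)
    k += 0.1 * grad
print("learned gain k:", k, "(analytic optimum:", -A / B, ")")
```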
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)