When to Update Your Model: Constrained Model-based Reinforcement
Learning
- URL: http://arxiv.org/abs/2210.08349v4
- Date: Wed, 8 Nov 2023 07:17:16 GMT
- Title: When to Update Your Model: Constrained Model-based Reinforcement
Learning
- Authors: Tianying Ji, Yu Luo, Fuchun Sun, Mingxuan Jing, Fengxiang He, Wenbing
Huang
- Abstract summary: We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
The bounds we subsequently derive reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
- Score: 50.74369835934703
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Designing and analyzing model-based RL (MBRL) algorithms with guaranteed
monotonic improvement has been challenging, mainly due to the interdependence
between policy optimization and model learning. Existing discrepancy bounds
generally ignore the impact of model shifts, and their corresponding
algorithms are prone to performance degradation under drastic model updates. In this
work, we first propose a novel and general theoretical scheme for a
non-decreasing performance guarantee of MBRL. Our follow-up derived bounds
reveal the relationship between model shifts and performance improvement. These
discoveries encourage us to formulate a constrained lower-bound optimization
problem that guarantees the monotonicity of MBRL. A further example demonstrates that
learning models from a dynamically-varying number of explorations benefits the
eventual returns. Motivated by these analyses, we design a simple but effective
algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), which
introduces an event-triggered mechanism that flexibly determines when to
update the model. Experiments show that CMLO surpasses other state-of-the-art
methods and produces a boost when various policy optimization methods are
employed.
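To make the event-triggered idea concrete, here is a minimal, self-contained Python sketch of such a loop on a toy linear system: the dynamics model is refit only when a model-shift proxy (prediction error on fresh data) crosses a threshold. The toy system, the proxy, and the threshold are all assumptions of this illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])  # unknown true dynamics

def env_step(s, a):
    """True environment: linear dynamics plus small Gaussian noise."""
    return A_true @ s + a + 0.01 * rng.normal(size=2)

class LinearModel:
    """Learned dynamics model s' ~= A s + a, refit by least squares."""
    def __init__(self):
        self.A = np.zeros((2, 2))  # deliberately poor initial model
    def predict(self, S, Act):
        return S @ self.A.T + Act
    def fit(self, S, Act, S_next):
        # Solve S @ A^T ~= S_next - Act in the least-squares sense.
        At, *_ = np.linalg.lstsq(S, S_next - Act, rcond=None)
        self.A = At.T

model = LinearModel()
states, actions, next_states = [], [], []
s = rng.normal(size=2)
for t in range(2000):
    a = 0.1 * rng.normal(size=2)  # stand-in for the current policy
    s_next = env_step(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next
    # Event trigger: every 50 steps, measure a model-shift proxy on the
    # freshest data and refit the model only if it exceeds the threshold.
    if t >= 100 and t % 50 == 0:
        S, Act, S2 = (np.array(x[-100:]) for x in (states, actions, next_states))
        if np.mean((model.predict(S, Act) - S2) ** 2) > 1e-3:
            model.fit(np.array(states), np.array(actions), np.array(next_states))

print("model error after training:", np.linalg.norm(model.A - A_true))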
Related papers
- Task-optimal data-driven surrogate models for eNMPC via differentiable simulation and optimization [42.72938925647165]
We present a method for end-to-end learning of Koopman surrogate models for optimal performance in a specific control task.
We use a training algorithm that exploits the potential differentiability of environments based on mechanistic simulation models to aid policy optimization.
arXiv Detail & Related papers (2024-03-21T14:28:43Z)
- How to Fine-tune the Model: Unified Model Shift and Model Bias Policy Optimization [13.440645736306267]
This paper develops an algorithm for model-based reinforcement learning.
It unifies model shift and model bias and then formulates a fine-tuning process.
It achieves state-of-the-art performance on several challenging benchmark tasks.
arXiv Detail & Related papers (2023-09-22T07:27:32Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models (a schematic optimism bonus appears after this list).
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Model-Invariant State Abstractions for Model-Based Reinforcement Learning [54.616645151708994]
We introduce a new type of state abstraction called model-invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariant state abstraction.
arXiv Detail & Related papers (2021-02-19T10:37:54Z)
- COMBO: Conservative Offline Model-Based Policy Optimization [120.55713363569845]
Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable.
We develop a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-actions.
We find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods (a sketch of its conservative regularizer appears after this list).
arXiv Detail & Related papers (2021-02-16T18:50:32Z)
- Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data, caused by inaccurate model estimation, for better policy optimization.
We propose a novel model-based reinforcement learning framework, AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art sample efficiency on a range of continuous control benchmark tasks (a distribution-alignment sketch appears after this list).
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Variational Model-based Policy Optimization [34.80171122943031]
Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with those collected from interaction with the real system in order to alleviate the data efficiency problem in RL.
We propose an objective function as a variational lower bound of the log-likelihood, used to jointly learn and improve model and policy (a schematic form of such a bound appears after this list).
Our experiments on a number of continuous control tasks show that, despite being more complex, our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient and robust than its model-free (M-step) counterpart.
arXiv Detail & Related papers (2020-06-09T18:30:15Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability, as sketched after this list.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
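For the latent-variable representation paper above, UCB-style optimism over learned embeddings can be illustrated with a standard elliptical bonus. The feature dimension, the data, and the scaling constants below are assumptions of this example, not the paper's algorithm.

```python
import numpy as np

def ucb_bonus(Phi, phi, reg=1.0, beta=1.0):
    """Elliptical UCB bonus: beta * sqrt(phi^T (Phi^T Phi + reg*I)^(-1) phi).
    Rows of Phi are embeddings of previously visited state-actions; phi is
    the embedding of a candidate state-action."""
    d = Phi.shape[1]
    cov = Phi.T @ Phi + reg * np.eye(d)
    return beta * float(np.sqrt(phi @ np.linalg.solve(cov, phi)))

# Hypothetical usage: rank candidate actions by value estimate plus bonus.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 8))       # past latent embeddings (assumed)
candidates = rng.normal(size=(4, 8))  # candidate embeddings (assumed)
bonuses = [ucb_bonus(Phi, phi) for phi in candidates]
best = int(np.argmax(bonuses))        # value term omitted for brevity
```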
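For COMBO above, here is a hedged PyTorch sketch of a conservative critic loss: the usual Bellman error plus a regularizer that pushes Q-values down on model-generated (possibly out-of-support) samples and up on real dataset samples. The interfaces and the weight beta are assumptions of this example.

```python
import torch

def conservative_critic_loss(q_net, real_sa, model_sa, bellman_target, beta=1.0):
    """Bellman error on real data plus a conservatism gap term."""
    q_real = q_net(real_sa)    # Q-values on dataset state-action pairs
    q_model = q_net(model_sa)  # Q-values on model-rollout state-action pairs
    bellman = torch.mean((q_real - bellman_target) ** 2)
    conservative_gap = q_model.mean() - q_real.mean()
    return bellman + beta * conservative_gap
```

Larger beta yields more pessimistic value estimates off-support; beta = 0 recovers a plain critic update.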
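For AMPO above, unsupervised model adaptation can be sketched by adding a distance between feature distributions of real and model-simulated data to the model's training loss. MMD with an RBF kernel is used here as one concrete choice of distance; the paper's exact metric, features, and weighting may differ.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between samples x and y (RBF kernel)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def adapted_model_loss(pred_next, true_next, feat_real, feat_sim, lam=0.1):
    """One-step prediction error plus a feature-alignment penalty."""
    prediction = torch.mean((pred_next - true_next) ** 2)
    return prediction + lam * mmd_rbf(feat_real, feat_sim)
```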
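For the Variational Model-based Policy Optimization entry above, such a lower bound has, schematically, the familiar ELBO shape from control-as-inference; the notation here is generic rather than the paper's:

```latex
\log p_\theta(\mathcal{O})
  \;\ge\;
  \mathbb{E}_{q_\phi(\tau)}\!\big[\log p_\theta(\mathcal{O} \mid \tau)\big]
  \;-\;
  \mathrm{KL}\!\big(q_\phi(\tau) \,\|\, p_\theta(\tau)\big)
```

where \tau is a trajectory, \mathcal{O} the optimality event, p_\theta the model-induced trajectory distribution, and q_\phi the variational distribution tied to the policy.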
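Finally, for Model-Augmented Actor-Critic above, exploiting the model's differentiability can be sketched as backpropagating the return of a short imagined rollout through the learned dynamics into the policy. The module interfaces, horizon, and discount are assumptions of this example.

```python
import torch

def pathwise_policy_loss(policy, dynamics, reward_fn, s0, horizon=5, gamma=0.99):
    """Negative discounted return of an imagined rollout; gradients flow
    through `dynamics` and `reward_fn` into `policy` (all differentiable)."""
    s, ret, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)          # deterministic or reparameterized action
        ret = ret + discount * reward_fn(s, a)
        s = dynamics(s, a)     # differentiable learned model step
        discount *= gamma
    return -ret.mean()         # minimizing this ascends the imagined return
```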