Maximum Entropy Model Rollouts: Fast Model Based Policy Optimization
without Compounding Errors
- URL: http://arxiv.org/abs/2006.04802v2
- Date: Mon, 29 Jun 2020 00:07:27 GMT
- Title: Maximum Entropy Model Rollouts: Fast Model Based Policy Optimization
without Compounding Errors
- Authors: Chi Zhang, Sanmukh Rao Kuppannagari, Viktor K Prasanna
- Abstract summary: We propose a Dyna-style model-based reinforcement learning algorithm, which we call Maximum Entropy Model Rollouts (MEMR).
To eliminate compounding errors, we use our model only to generate single-step rollouts.
- Score: 10.906666680425754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model usage is the central challenge of model-based reinforcement learning.
Although dynamics models based on deep neural networks provide good
generalization for single-step prediction, this ability is over-exploited when
they are used to predict long-horizon trajectories, due to compounding errors.
In this work, we propose a Dyna-style model-based reinforcement learning
algorithm, which we call Maximum Entropy Model Rollouts (MEMR). To eliminate
compounding errors, we use our model only to generate single-step rollouts.
Furthermore, we propose to generate \emph{diverse} model rollouts by
non-uniform sampling of the environment states such that the entropy of the
model rollouts is maximized. We mathematically derive the maximum entropy
sampling criterion for a single data case under a Gaussian prior. To satisfy
this criterion, we propose to utilize a prioritized experience replay. Our
preliminary experiments on challenging locomotion benchmarks show that our
approach matches the sample efficiency of the best model-based algorithms,
matches the asymptotic performance of the best model-free algorithms, and
significantly reduces the computational requirements of other model-based
methods.
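As a rough illustration of the core idea (single-step model rollouts seeded from non-uniformly sampled environment states), here is a minimal sketch. All names (`PrioritizedStateBuffer`, `generate_single_step_rollouts`, `model.predict`, `policy`) are hypothetical and not from the paper's code; the priority values stand in for the maximum entropy sampling criterion the paper derives, and in a full Dyna-style loop the resulting synthetic transitions would feed an off-policy learner.

```python
import numpy as np

# Illustrative sketch only: single-step model rollouts with prioritized
# (non-uniform) sampling of start states. Names and interfaces are
# assumptions, not the paper's implementation.

class PrioritizedStateBuffer:
    """Stores environment states with priorities; states with higher
    priority (standing in for the maximum-entropy criterion) are
    sampled more often, keeping the generated rollouts diverse."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.states = []
        self.priorities = []

    def add(self, state, priority):
        if len(self.states) >= self.capacity:
            self.states.pop(0)
            self.priorities.pop(0)
        self.states.append(state)
        self.priorities.append(priority)

    def sample(self, batch_size):
        p = np.asarray(self.priorities, dtype=np.float64)
        p = p / p.sum()
        idx = np.random.choice(len(self.states), size=batch_size, p=p)
        return [self.states[i] for i in idx]


def generate_single_step_rollouts(model, policy, buffer, batch_size):
    """Generate one-step model rollouts from prioritized start states,
    avoiding the compounding error of long imagined trajectories."""
    rollouts = []
    for s in buffer.sample(batch_size):
        a = policy(s)                    # action from the current policy
        s_next, r = model.predict(s, a)  # single forward step of the learned model
        rollouts.append((s, a, r, s_next))
    return rollouts
```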
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z) - Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z) - When to Update Your Model: Constrained Model-based Reinforcement
Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - On Statistical Efficiency in Learning [37.08000833961712]
We address the challenge of model selection to strike a balance between model fitting and model complexity.
We propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce cost.
Experimental studies show that the proposed method has desirable predictive power and significantly less computational cost than some popular methods.
arXiv Detail & Related papers (2020-12-24T16:08:29Z) - Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data, caused by inaccurate model estimation, to enable better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z) - Active Sampling for Min-Max Fairness [28.420886416425077]
We propose simple active sampling and reweighting strategies for optimizing min-max fairness.
The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups.
For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.
arXiv Detail & Related papers (2020-06-11T23:57:55Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.