Sample-Efficient Reinforcement Learning via Conservative Model-Based
Actor-Critic
- URL: http://arxiv.org/abs/2112.10504v1
- Date: Thu, 16 Dec 2021 15:33:11 GMT
- Title: Sample-Efficient Reinforcement Learning via Conservative Model-Based
Actor-Critic
- Authors: Zhihai Wang, Jie Wang, Qi Zhou, Bin Li, Houqiang Li
- Abstract summary: Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
- Score: 67.00475077281212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based reinforcement learning algorithms, which aim to learn a model of
the environment to make decisions, are more sample efficient than their
model-free counterparts. The sample efficiency of model-based approaches depends
on how well the model approximates the environment. However, learning an
accurate model is challenging, especially in complex and noisy environments. To
tackle this problem, we propose the conservative model-based actor-critic
(CMBAC), a novel approach that achieves high sample efficiency without the
strong reliance on accurate learned models. Specifically, CMBAC learns multiple
estimates of the Q-value function from a set of inaccurate models and uses the
average of the bottom-k estimates -- a conservative estimate -- to optimize the
policy. An appealing feature of CMBAC is that the conservative estimates
effectively encourage the agent to avoid unreliable "promising actions" --
whose values are high in only a small fraction of the models. Experiments
demonstrate that CMBAC significantly outperforms state-of-the-art approaches in
terms of sample efficiency on several challenging tasks, and the proposed
method is more robust than previous methods in noisy environments.
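A minimal sketch of the bottom-k averaging idea described above, written in Python with PyTorch; the function name conservative_q, the ensemble size, and the choice k=2 are illustrative assumptions, not the authors' implementation:

import torch

def conservative_q(q_estimates: torch.Tensor, k: int) -> torch.Tensor:
    """Average the k smallest Q estimates per state-action pair.

    q_estimates: tensor of shape (num_estimates, batch_size), one row per
                 Q-value estimate obtained from a (possibly inaccurate)
                 learned model.
    Returns a tensor of shape (batch_size,).
    """
    # torch.topk with largest=False returns the k smallest values along dim 0.
    bottom_k, _ = torch.topk(q_estimates, k, dim=0, largest=False)
    return bottom_k.mean(dim=0)

# Example: 8 Q-value estimates for a batch of 4 state-action pairs.
q_estimates = torch.randn(8, 4)
conservative_target = conservative_q(q_estimates, k=2)

# A policy update would then maximize this conservative estimate, e.g.
# policy_loss = -conservative_q(q_ensemble(states, policy(states)), k=2).mean()

Because the bottom-k average is low unless most models agree that an action is valuable, actions that look promising under only a few models receive a pessimistic estimate, which is the behavior the abstract describes.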
Related papers
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a methodology for finding sequences of machine learning models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
Our method shows stronger stability than greedily trained models with a small, controllable sacrifice in predictive power.
arXiv Detail & Related papers (2024-03-28T22:45:38Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Model predictive control-based value estimation for efficient reinforcement learning [6.8237783245324035]
We design an improved reinforcement learning method based on model predictive control that models the environment through a data-driven approach.
Based on the learned environment model, it performs multi-step prediction to estimate the value function and optimize the policy.
The method demonstrates higher learning efficiency, faster convergence of the policy toward a locally optimal value, and a smaller required experience replay buffer.
arXiv Detail & Related papers (2023-10-25T13:55:14Z) - COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically
for Model-Based RL [50.385005413810084]
Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real-environment exploration (see the sketch after this list).
COPlanner is a planning-driven framework for model-based methods that addresses the problem of inaccurately learned dynamics models.
arXiv Detail & Related papers (2023-10-11T06:10:07Z) - Optimizing Hyperparameters with Conformal Quantile Regression [7.316604052864345]
We propose to leverage conformalized quantile regression which makes minimal assumptions about the observation noise.
This translates to quicker HPO convergence on empirical benchmarks.
arXiv Detail & Related papers (2023-05-05T15:33:39Z) - Evaluating model-based planning and planner amortization for continuous
control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z) - Sample Efficient Reinforcement Learning via Model-Ensemble Exploration
and Exploitation [3.728946517493471]
MEEE is a model-ensemble method that consists of optimistic exploration and weighted exploitation.
Our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity.
arXiv Detail & Related papers (2021-07-05T07:18:20Z) - On the model-based stochastic value gradient for continuous
reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
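The Dyna-style two-phase loop referenced in the COPlanner entry above can be sketched as follows; the interfaces (env, dynamics_model, policy, and the replay buffers) are assumed placeholders rather than APIs from any of the listed papers:

# Generic Dyna-style model-based RL loop (illustrative only; all object
# interfaces below are assumptions, not code from the papers above).
def dyna_style_training(env, dynamics_model, policy, real_buffer, model_buffer,
                        num_iterations=1000, rollout_horizon=5,
                        rollouts_per_iter=100):
    state = env.reset()
    for _ in range(num_iterations):
        # Phase 1: real-environment exploration collects genuine transitions.
        action = policy.act(state)
        next_state, reward, done = env.step(action)  # assumed 3-tuple interface
        real_buffer.add(state, action, reward, next_state, done)
        state = env.reset() if done else next_state

        # Fit the dynamics model to the real transitions gathered so far.
        dynamics_model.fit(real_buffer)

        # Phase 2: short model rollouts generate synthetic samples for policy learning.
        for start_state in real_buffer.sample_states(rollouts_per_iter):
            s = start_state
            for _ in range(rollout_horizon):
                a = policy.act(s)
                s_next, r = dynamics_model.predict(s, a)
                model_buffer.add(s, a, r, s_next, done=False)
                s = s_next

        # Update the policy (e.g. an actor-critic) on the synthetic data.
        policy.update(model_buffer)

Keeping rollout_horizon short limits how far model errors can compound, which is the usual motivation for combining such loops with conservative or uncertainty-aware value estimates.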
This list is automatically generated from the titles and abstracts of the papers on this site.