Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
- URL: http://arxiv.org/abs/2012.05909v2
- Date: Tue, 13 Apr 2021 18:07:49 GMT
- Title: Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
- Authors: Mohak Bhardwaj, Sanjiban Choudhury, Byron Boots
- Abstract summary: Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems.
We present a framework for improving on MPC with model-free reinforcement learning (RL).
We show that our approach can obtain performance comparable with MPC with access to true dynamics.
- Score: 42.429730406277315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-Predictive Control (MPC) is a powerful tool for controlling complex,
real-world systems that uses a model to make predictions about future behavior.
For each state encountered, MPC solves an online optimization problem to choose
a control action that will minimize future cost. This is a surprisingly
effective strategy, but real-time performance requirements warrant the use of
simple models. If the model is not sufficiently accurate, then the resulting
controller can be biased, limiting performance. We present a framework for
improving on MPC with model-free reinforcement learning (RL). The key insight
is to view MPC as constructing a series of local Q-function approximations. We
show that by using a parameter $\lambda$, similar to the trace decay parameter
in TD($\lambda$), we can systematically trade off learned value estimates
against the local Q-function approximations. We present a theoretical analysis
that shows how error from inaccurate models in MPC and value function
estimation in RL can be balanced. We further propose an algorithm that changes
$\lambda$ over time to reduce the dependence on MPC as our estimates of the
value function improve, and test the efficacy of our approach on challenging
high-dimensional manipulation tasks with biased models in simulation. We
demonstrate that our approach can obtain performance comparable with MPC with
access to true dynamics even under severe model bias, and is more sample
efficient than model-free RL.
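To make the $\lambda$ trade-off concrete, below is a minimal sketch of an H-step, TD($\lambda$)-style blending of model-based rollout costs with a learned terminal value estimate. The names (`model_step`, `cost_fn`, `value_fn`) are placeholders and the exact weighting in the paper may differ; this is an illustration, not the authors' released code.
```python
def blended_q_estimate(state, action_seq, model_step, cost_fn, value_fn,
                       lam=0.9, gamma=0.99):
    """Hypothetical sketch: blend H-step model rollout costs with a learned
    value function using a TD(lambda)-style geometric weighting.

    model_step(s, a) -> next state under the (possibly biased) model
    cost_fn(s, a)    -> per-step cost
    value_fn(s)      -> learned cost-to-go estimate
    lam = 1 trusts the full model rollout; lam = 0 bootstraps from
    value_fn after a single step.
    """
    H = len(action_seq)
    s, running_cost, blended = state, 0.0, 0.0
    for h, a in enumerate(action_seq):
        s_next = model_step(s, a)
        running_cost += (gamma ** h) * cost_fn(s, a)
        # (h+1)-step cost-to-go, bootstrapped with the learned value function.
        h_step_return = running_cost + (gamma ** (h + 1)) * value_fn(s_next)
        # Geometric TD(lambda)-style weights; they sum to one over h = 0..H-1.
        weight = lam ** (H - 1) if h == H - 1 else (1 - lam) * lam ** h
        blended += weight * h_step_return
        s = s_next
    return blended
```
In the paper's framing, $\lambda$ is then decayed over training so the controller relies less on the biased model as the learned value estimates improve; the scheduling rule itself is not reproduced here.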
Related papers
- Deep Model Predictive Optimization [21.22047409735362]
A major challenge in robotics is to design robust policies which enable complex and agile behaviors in the real world.
We propose Deep Model Predictive Optimization (DMPO), which learns the inner-loop of an MPC optimization algorithm directly via experience.
DMPO can outperform the best MPC algorithm by up to 27% while using fewer samples, and an end-to-end policy trained with model-free RL (MFRL) by 19%.
arXiv Detail & Related papers (2023-10-06T21:11:52Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
- Learning to Optimize in Model Predictive Control [36.82905770866734]
Sampling-based Model Predictive Control (MPC) is a flexible control framework that can reason about non-smooth dynamics and cost functions.
We show that learning to optimize can be particularly useful in sampling-based MPC, where we often wish to minimize the number of samples.
We show that we can contend with the resulting sampling noise by learning how to update the control distribution more effectively, making better use of the few samples that we have (a vanilla sampling-based update is sketched after this entry for context).
arXiv Detail & Related papers (2022-12-05T21:20:10Z)
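For context, the vanilla update that sampling-based MPC methods such as MPPI use, and that learned optimizers aim to improve on, is a cost-weighted re-averaging of sampled action sequences. The sketch below is a generic illustration with made-up names (`mean_seq`, `rollout_cost`), not the learned update proposed in the paper above.
```python
import numpy as np

def sampling_mpc_update(mean_seq, noise_std, rollout_cost,
                        num_samples=32, temperature=1.0):
    """Generic MPPI-style update of the control distribution (illustrative only).

    mean_seq     : current mean action sequence, shape (H, action_dim)
    noise_std    : std of the Gaussian exploration noise around mean_seq
    rollout_cost : callable mapping an (H, action_dim) action sequence to a
                   scalar predicted cost under the model
    """
    noise = np.random.randn(num_samples, *mean_seq.shape) * noise_std
    samples = mean_seq[None] + noise                      # perturbed sequences
    costs = np.array([rollout_cost(seq) for seq in samples])
    # Exponentially weight low-cost samples (softmax over negative cost).
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    # New mean is the old mean plus the weighted average perturbation.
    return mean_seq + np.einsum("n,nhd->hd", weights, noise)
```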
- Value Gradient weighted Model-Based Reinforcement Learning [28.366157882991565]
Model-based reinforcement learning (MBRL) is a sample-efficient technique for obtaining control policies.
VaGraM is a novel method for value-aware model learning.
arXiv Detail & Related papers (2022-04-04T13:28:31Z)
- On Effective Scheduling of Model-based Reinforcement Learning [53.027698625496015]
In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance.
We then propose a framework named AutoMBPO to automatically schedule the real data ratio during training.
arXiv Detail & Related papers (2021-11-16T15:24:59Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL (a standard form of this connection is sketched after this entry).
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
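For reference, the connection mentioned in the last entry above is usually stated through a free-energy form of the trajectory cost. The display below is a hedged sketch in generic notation ($S(\tau)$ for trajectory cost, $\beta$ for a temperature, $\bar{\pi}$ for a prior policy; these symbols are not taken from the paper):
$$
V_{\mathrm{MPC}}(s) \;=\; -\beta \,\log\, \mathbb{E}_{\tau \sim p(\cdot\mid s)}\!\left[\exp\!\left(-\tfrac{1}{\beta}\, S(\tau)\right)\right],
\qquad
V_{\mathrm{soft}}(s) \;=\; -\beta \,\log\, \mathbb{E}_{a \sim \bar{\pi}(\cdot\mid s)}\!\left[\exp\!\left(-\tfrac{1}{\beta}\, Q_{\mathrm{soft}}(s,a)\right)\right].
$$
Both take the same log-sum-exp (soft minimum) form over costs, which is what allows a Q-learning style update to be combined with information theoretic MPC even when the rollout model is biased.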
This list is automatically generated from the titles and abstracts of the papers on this site.